Compare commits


79 Commits

Author SHA1 Message Date
006f5e722d to merge with previous commit on main 2023-03-03 11:25:25 +01:00
949cc2199b Add support for device binding
In some cases we need to use block devices directly (physical volume
storage).

This new variable lets the user mount devices with the --device
option on Flatcar bare-metal.

How to use:

Add a list of objects like this to the cluster definition:

worker_bind_devices = [
  { source = "/dev/sdb", target = "/dev/sdb" },
  { source = "/dev/sdc", target = "/dev/sdc" },
  { source = "/dev/sdd", target = "/dev/sdd" }
]

This will bind each source device to its target path on every node.
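
As a rough sketch, such a list could be rendered into `--device` flags for the
container runtime invocation. The names and rendering below are illustrative,
not the module's actual implementation:

```tf
# Illustrative only: render bind-device objects into --device flags.
locals {
  worker_bind_devices = [
    { source = "/dev/sdb", target = "/dev/sdb" },
  ]

  # e.g. "--device /dev/sdb:/dev/sdb"
  device_flags = join(" ", [
    for dev in local.worker_bind_devices : "--device ${dev.source}:${dev.target}"
  ])
}

output "device_flags" {
  value = local.device_flags
}
```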
2023-03-03 10:41:56 +01:00
76ebc08fd2 Update Kubernetes from v1.26.1 to v1.26.2
* https://github.com/poseidon/terraform-render-bootstrap/pull/345
2023-03-01 17:13:16 -08:00
86e8484e0a Change bare-metal workers variable to optional
* To accompany the restructure of the bare-metal modules to
allow discrete workers to be defined and attached to a cluster
(#1295), the `workers` variable (the older way of defining
homogeneous workers inline) should be optional and default
to an empty list (sketched below)
* Add docs covering inline vs discrete metal workers

Fix #1301
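
A minimal sketch of the variable shape this implies (the object type is assumed
from the worker examples elsewhere in this change, not copied from the module):

```tf
variable "workers" {
  type = list(object({
    name   = string
    mac    = string
    domain = string
  }))
  description = "List of inline worker machine definitions (optional)"
  # Defaulting to an empty list makes inline workers optional.
  default = []
}
```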
2023-03-01 14:37:47 -08:00
cf20e686c0 Bump mkdocs-material from 9.0.13 to 9.0.15
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.13 to 9.0.15.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.13...9.0.15)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-01 13:50:37 -08:00
420ddd2154 Bump mkdocs-material from 9.0.12 to 9.0.13
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.12 to 9.0.13.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.12...9.0.13)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-20 10:44:45 -08:00
435b3d4c88 Bump mkdocs-material from 9.0.11 to 9.0.12
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.11 to 9.0.12.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.11...9.0.12)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-15 09:44:54 -08:00
f3c327007d Update flannel from v0.20.2 to v0.21.1
* https://github.com/flannel-io/flannel/releases/tag/v0.21.1
2023-02-09 09:56:25 -08:00
406fb444f0 Update Cilium from v1.12.5 to v1.12.6
* https://github.com/cilium/cilium/releases/tag/v1.12.6
2023-02-09 09:45:40 -08:00
1caea3388c Restructure bare-metal module to use a worker submodule
* Add an internal `worker` module to the bare-metal module, to
allow individual bare-metal machines to be defined and joined
to an existing bare-metal cluster. This is similar to the "worker
pools" modules for adding sets of nodes to cloud (AWS, GCP, Azure)
clusters, but on metal, each piece of hardware is potentially
unique

New: Using the new `worker` module, a Kubernetes cluster can be defined
without any `workers` (i.e. just a control plane). Use the `worker`
module to define each machine that should join the bare-metal
cluster and customize it in detail. This style is quite flexible and
well suited to clusters whose hardware varies considerably.

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.2"

  # bare-metal
  cluster_name            = "mercury"
  matchbox_http_endpoint  = "http://matchbox.example.com"
  os_channel              = "flatcar-stable"
  os_version              = "2345.3.1"

  # configuration
  k8s_domain_name    = "node1.example.com"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # machines
  controllers = [{
    name   = "node1"
    mac    = "52:54:00:a1:9c:ae"
    domain = "node1.example.com"
  }]
}
```

```tf
module "mercury-node1" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes/worker?ref=v1.26.2"

  cluster_name = "mercury"

  # bare-metal
  matchbox_http_endpoint  = "http://matchbox.example.com"
  os_channel              = "flatcar-stable"
  os_version              = "2345.3.1"

  # configuration
  name               = "node2"
  mac                = "52:54:00:b2:2f:86"
  domain             = "node2.example.com"
  kubeconfig         = module.mercury.kubeconfig
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional
  snippets       = []
  node_labels    = []
  node_taints    = []
  install_disk   = "/dev/vda"
  cached_install = false
}
```

For clusters with fairly similar hardware, you may continue to
define `workers` directly within the cluster definition. This
reduces some repetition, but is not quite as flexible.

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.1"

  # bare-metal
  cluster_name            = "mercury"
  matchbox_http_endpoint  = "http://matchbox.example.com"
  os_channel              = "flatcar-stable"
  os_version              = "2345.3.1"

  # configuration
  k8s_domain_name    = "node1.example.com"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # machines
  controllers = [{
    name   = "node1"
    mac    = "52:54:00:a1:9c:ae"
    domain = "node1.example.com"
  }]
  workers = [
    {
      name   = "node2",
      mac    = "52:54:00:b2:2f:86"
      domain = "node2.example.com"
    },
    {
      name   = "node3",
      mac    = "52:54:00:c3:61:77"
      domain = "node3.example.com"
    }
  ]
}
```

Optional variables `snippets`, `worker_node_labels`, and
`worker_node_taints` are still defined as maps from machine name
to a list of snippets, labels, or taints respectively, to allow some
degree of per-machine customization. However, fields like
`install_disk`, `kernel_args`, `cached_install`, and future options
will not be designed this way. Instead, if your machines vary, it
is recommended to use the new `worker` module to define each node.
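
For illustration, the map-style per-machine customization might look like this
inside the cluster definition (machine names and values are placeholders):

```tf
module "mercury" {
  # ...cluster definition as above...

  # keyed by machine name
  snippets = {
    "node2" = [file("./snippets/node2.yaml")]
  }
  worker_node_labels = {
    "node2" = ["pool=storage"]
  }
  worker_node_taints = {
    "node3" = ["role=gateway:NoSchedule"]
  }
}
```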
2023-02-09 08:29:28 -08:00
d04d88023d Bump mkdocs-material from 9.0.6 to 9.0.11
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.6 to 9.0.11.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.6...9.0.11)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-07 09:12:43 -08:00
a205922d06 Update Calico from v3.24.5 to v3.25.0
* https://github.com/poseidon/terraform-render-bootstrap/pull/342
2023-01-24 08:29:08 -08:00
b5ba65d4c2 Update etcd from v3.5.6 to v3.5.7
* https://github.com/etcd-io/etcd/releases/tag/v3.5.7
2023-01-24 08:29:08 -08:00
e696fd2b22 Bump mkdocs-material from 9.0.5 to 9.0.6
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.5 to 9.0.6.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.5...9.0.6)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-23 09:08:20 -08:00
3ff9b792ca Bump pymdown-extensions from 9.9.1 to 9.9.2
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.9.1 to 9.9.2.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.9.1...9.9.2)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-23 09:02:30 -08:00
c4f1d2d1c8 Bump pymdown-extensions from 9.9 to 9.9.1
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.9 to 9.9.1.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.9...9.9.1)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-19 08:54:46 -08:00
a1d7b5cd1e Bump mkdocs-material from 9.0.3 to 9.0.5
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.3 to 9.0.5.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.3...9.0.5)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-19 08:51:56 -08:00
e7591030e0 Remove Twitter badge from README, we're on the Fediverse now 2023-01-19 08:43:49 -08:00
f2bf5ac3fb Update Kubernetes from v1.26.0 to v1.26.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261
2023-01-19 08:27:56 -08:00
9cd1c5b17a Bump mkdocs-material from 9.0.0 to 9.0.3
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.0.0 to 9.0.3.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.0.0...9.0.3)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-11 20:48:11 -08:00
d6f739dedb Bump mkdocs-material from 8.5.11 to 9.0.0
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.11 to 9.0.0.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Upgrade guide](https://github.com/squidfunk/mkdocs-material/blob/master/docs/upgrade.md)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.11...9.0.0)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-02 22:29:06 -08:00
6bb7a36cf2 Bump pygments from 2.13.0 to 2.14.0
Bumps [pygments](https://github.com/pygments/pygments) from 2.13.0 to 2.14.0.
- [Release notes](https://github.com/pygments/pygments/releases)
- [Changelog](https://github.com/pygments/pygments/blob/master/CHANGES)
- [Commits](https://github.com/pygments/pygments/compare/2.13.0...2.14.0)

---
updated-dependencies:
- dependency-name: pygments
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-02 22:20:59 -08:00
0afe9d65ed Update Cilium from v1.12.4 to v1.12.5
* https://github.com/cilium/cilium/releases/tag/v1.12.5
2022-12-21 08:13:35 -08:00
11e540000f Update CHANGES to reiterate Terraform Module Registry deprecation
* Terraform supports sourcing modules from either Git repos or from
their own hosted Terraform Module Registry, introduced a few years ago
* Typhoon docs have always shown using Git-based module sources, not
the Terraform Module Registry. For example, module usage should be
`source = "git::https://github.com/poseidon/typhoon/...` not
`source = poseidon/kubernetes/...`
* Typhoon published Flatcar Linux modules (CoreOS Container Linux at the time)
to Terraform Module Registry, but the approach has a number of drawbacks
for publishers and for users.
  * Terraform's Module Registry requires subtree mirroring Typhoon to special
  terraform-platform-kubernetes repos. This distorts Git history,
  requires special automation, and the registry's naming requirements
  don't allow us to publish our full matrix of modules (Fedora CoreOS
  and Flatcar Linux, across AWS, Azure, GCP, on-prem, and DigitalOcean)
  * Terraform's Module Registry only supports release versions (no commit SHAs
  or forks)
* Ultimately, the Terraform Module Registry limits user flexibility, has
tedious publishing constraints, and introduces centralization where the
current decentralized Git-based approach is simpler and more featureful

Note: This does not affect Terraform _Providers_ like `poseidon/matchbox`
or `poseidon/ct`. For Terraform providers, Terraform's centralized
platform eases provider plugin installation and provides value
2022-12-10 10:00:22 -08:00
d6cbcf9f96 Update Kubernetes from v1.26.0-rc.1 to v1.26.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260
2022-12-08 08:47:24 -08:00
ce52a2cd35 Update Nginx Ingress and monitoring addon components
* Update ingress-nginx, Prometheus, node-exporter, and
  kube-state-metrics
2022-12-05 09:38:38 -08:00
bd9a908125 Bump mkdocs-material from 8.5.10 to 8.5.11
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.10 to 8.5.11.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.10...8.5.11)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-05 09:35:43 -08:00
0dc8740c77 Update Kubernetes from v1.26.0-rc.0 to v1.26.0-rc.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc1
2022-12-05 09:31:45 -08:00
a9b12b6bca Update Kubernetes from v1.25.4 to v1.26.0-rc.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc0
2022-11-30 08:47:40 -08:00
d419c58ab1 Add Equinix to the sponsors list
* Thank you Equinix!
2022-11-30 00:30:39 -08:00
da76d32aba Migrate AWS launch configurations to launch templates
* Same features, but AWS will soon require launch templates (see the sketch below)
* Starting Dec 31, 2022, AWS will not add new instance types
(e.g. Graviton 4) to launch configuration support

Rel: https://aws.amazon.com/blogs/compute/amazon-ec2-auto-scaling-will-no-longer-add-support-for-new-ec2-features-to-launch-configurations/
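
In outline, the migration replaces an `aws_launch_configuration` with an
`aws_launch_template` referenced from the autoscaling group. A simplified
sketch with placeholder values, not Typhoon's exact resources:

```tf
resource "aws_launch_template" "worker" {
  name_prefix   = "example-worker"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.small"
}

resource "aws_autoscaling_group" "workers" {
  name               = "example-workers"
  min_size           = 1
  max_size           = 3
  desired_capacity   = 1
  availability_zones = ["us-east-1a"]

  # launch_template replaces the deprecated launch_configuration argument
  launch_template {
    id      = aws_launch_template.worker.id
    version = aws_launch_template.worker.latest_version
  }
}
```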
2022-11-30 00:26:03 -08:00
f0e5982b3c Bump pymdown-extensions from 9.8 to 9.9
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.8 to 9.9.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.8...9.9)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-29 08:43:17 -08:00
a8990b3045 Fix flannel container image registry location
* https://github.com/poseidon/terraform-render-bootstrap/pull/336
2022-11-23 16:18:30 -08:00
f597f7cda3 Update Prometheus and Grafana addons 2022-11-23 11:06:03 -08:00
b4857c123e Update flannel from v0.15.1 to v0.20.1
* https://github.com/flannel-io/flannel/releases/tag/v0.20.1
2022-11-23 11:03:29 -08:00
50bffaae8f Update etcd from v3.5.5 to v3.5.6 in CHANGES.md 2022-11-23 11:01:24 -08:00
a193762eed Update etcd from v3.5.5 to v3.5.6
* https://github.com/etcd-io/etcd/releases/tag/v3.5.6
2022-11-23 10:59:17 -08:00
adf33df99b Update Cilium from v1.12.3 to v1.12.4
* https://github.com/cilium/cilium/releases/tag/v1.12.4
2022-11-23 10:58:27 -08:00
29a005b7b4 Update CHANGELOG links 2022-11-17 07:55:58 -08:00
ccebc2313d Bump mkdocs-material from 8.5.8 to 8.5.10
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.8 to 8.5.10.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.8...8.5.10)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-14 18:29:11 -08:00
1f86592d13 Bump pymdown-extensions from 9.7 to 9.8
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.7 to 9.8.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.7...9.8)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-14 18:26:34 -08:00
6a521257d0 Link to new Mastodon accounts
* @typhoon@fosstodon.org will announce Typhoon releases, like the
@typhoon8s Twitter account does today
* @poseidon@fosstodon.org will announce Poseidon Labs news and
general projects, like the @poseidonlabs Twitter account does today
2022-11-10 09:48:30 -08:00
26dbc7e91d Update Kubernetes from v1.25.3 to v1.25.4
* Update Calico from v3.24.3 to v3.24.5
* Update Prometheus and Grafana addons
2022-11-10 09:42:21 -08:00
de668e696a Bump mkdocs-material from 8.5.7 to 8.5.8
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.7 to 8.5.8.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.7...8.5.8)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-07 09:45:38 -08:00
d3b2217444 Bump mkdocs from 1.4.1 to 1.4.2
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.4.1 to 1.4.2.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.4.1...1.4.2)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-07 09:36:20 -08:00
937acc4b5a Re-enable Graceful Node Shutdown feature
* Kubelet GracefulNodeShutdown works, but only partially handles
gracefully stopping the Kubelet. The most noticeable drawback
is that Completed Pods are left around
* Use a project like poseidon/scuttle or a similar systemd unit
as a snippet to add drain and/or delete behaviors if desired (see the sketch below)
* This reverts commit 1786e34f33.

Rel:

* https://www.psdn.io/posts/kubelet-graceful-shutdown/
* https://github.com/poseidon/scuttle
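
A hedged sketch of passing such a snippet to worker nodes. The variable name,
Butane variant, and unit contents are illustrative (modeled on the old
delete-node.service pattern), not poseidon/scuttle's actual configuration:

```tf
module "tempest-workers" {
  # ...worker pool or cluster definition...

  # Illustrative snippet: a oneshot unit that drains the node on shutdown.
  worker_snippets = [
    <<-EOT
    variant: fcos
    version: 1.4.0
    systemd:
      units:
        - name: drain-node.service
          enabled: true
          contents: |
            [Unit]
            Description=Drain Kubernetes node on shutdown (illustrative)
            [Service]
            Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
            Type=oneshot
            RemainAfterExit=true
            ExecStart=/bin/true
            ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig drain $HOSTNAME --ignore-daemonsets --delete-emptydir-data'
            [Install]
            WantedBy=multi-user.target
    EOT
  ]
}
```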
2022-11-02 20:49:01 -07:00
b0a6dc8115 Bump mkdocs-material from 8.5.6 to 8.5.7
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.6 to 8.5.7.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.6...8.5.7)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-25 19:27:41 -07:00
420ff6ff04 Bump pymdown-extensions from 9.6 to 9.7
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.6 to 9.7.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.6...9.7)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-25 17:50:48 -07:00
9b733d79c7 Update Calico v3.24.2 to v3.24.3
* https://github.com/projectcalico/calico/releases/tag/v3.24.3
* Add patch to allow Kubelet kubeconfig to drain nodes if desired
in addition to just deleting them in shutdown integrations. See
https://github.com/poseidon/terraform-render-bootstrap/pull/330
2022-10-23 22:00:15 -07:00
35a9e22b1f Update Calico from v3.24.1 to v3.24.2
* https://github.com/projectcalico/calico/releases/tag/v3.24.2
2022-10-20 09:28:19 -07:00
0f38a6d405 Remove defunct delete-node.service from worker nodes
* delete-node.service was once used to remove nodes from the
cluster on shutdown, but it has long since stopped working properly
* If there is still a desire for this concept, it can be added
with a custom snippet and with a better systemd unit
2022-10-20 08:43:48 -07:00
a535581ef2 Remove unused Wants=network.target from etcd-member
* network.target is a passive unit that's not actually pulled
in by units requiring or wanting it; it's only used for shutdown
ordering
> "Services using the network should ... avoid any Wants=network.target or even Requires=network.target"

Rel: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
2022-10-20 08:32:55 -07:00
08d13e7215 Improve release notes slightly with links 2022-10-20 08:30:30 -07:00
3ff2d38fa5 Update Cilium from v1.12.2 to v1.12.3
* https://github.com/cilium/cilium/releases/tag/v1.12.3
2022-10-17 17:25:23 -07:00
d6d8eb8d79 Bump mkdocs from 1.4.0 to 1.4.1
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.4.0 to 1.4.1.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.4.0...1.4.1)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-17 16:56:19 -07:00
f04e1d25a8 Add Flatcar Linux ARM64 support on Azure
* Kinvolk now publishes Flatcar Linux images for ARM64
* For now, amd64 images must specify a plan while arm64 images
must NOT specify a plan, due to how Kinvolk publishes them.

Rel: https://github.com/flatcar/Flatcar/issues/872
2022-10-17 08:36:57 -07:00
b68f8bb2a9 Switch Azure Fedora CoreOS default worker type
* Change default Azure worker_type from Standard_DS1_v2 to Standard_D2as_v5
  * Get 2 VCPU, 7 GiB, 12500Mbps (vs 1 VCPU, 3.5GiB, 750 Mbps)
  * Small increase in pay-as-you-go price ($53.29 -> $62.78)
  * Small increase in spot price ($5.64/mo -> $7.37/mo)
  * Change from Intel to AMD EPYC (`D2as_v5` cheaper than `D2s_v5`)

Rel:

* https://github.com/poseidon/typhoon/pull/1248
* https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series
* https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series
2022-10-13 21:23:57 -07:00
651151805d Update Kubernetes v1.25.2 to v1.25.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253
2022-10-13 21:02:39 -07:00
8d2c8b8db6 Switch to Flatcar Azure gen2 images and change worker type
* Switch from Azure Hypervisor generation 1 to generation 2
* Change default Azure `worker_type` from Standard_DS1_v2 to Standard_D2as_v5
  * Get 2 VCPU, 7 GiB, 12500Mbps (vs 1 VCPU, 3.5GiB, 750 Mbps)
  * Small increase in pay-as-you-go price ($53.29 -> $62.78)
  * Small increase in spot price ($5.64/mo -> $7.37/mo)
  * Change from Intel to AMD EPYC (`D2as_v5` cheaper than `D2s_v5`)

Notes: Azure makes you accept terms for each plan:

```
az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2
```

Rel:

* https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series
* https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series
2022-10-13 09:57:52 -07:00
675ac63159 Remove note about not supporting ARM64 with Calico CNI
* Calico v3.22.0 introduced multi-arch container images so Typhoon's
ARM64 support has allowed choosing Calico CNI since Typhoon v1.23.5
2022-10-11 23:21:02 -07:00
b4c8b1729c Switch addons images from k8s.gcr.io to registry.k8s.io
* Switch addon manifests to use the new Kubernetes image registry

Rel:

* https://github.com/poseidon/typhoon/pull/1206
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#moved-container-registry-service-from-k8sgcrio-to-registryk8sio
2022-10-09 16:14:28 -07:00
e82241169a Update Prometheus from v2.38.0 to v2.39.1
* https://github.com/prometheus/prometheus/releases/tag/v2.39.1
2022-10-09 16:12:35 -07:00
ffe4929ff6 Bump mkdocs-material from 8.5.3 to 8.5.6
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.3 to 8.5.6.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.3...8.5.6)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-09 14:44:06 -07:00
88b3925318 Bump pymdown-extensions from 9.5 to 9.6
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.5 to 9.6.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.5...9.6)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-03 15:34:37 -07:00
29876dc85a Bump mkdocs from 1.3.1 to 1.4.0
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.3.1 to 1.4.0.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.3.1...1.4.0)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-03 14:49:24 -07:00
7e29e35457 Bump mkdocs-material from 8.5.2 to 8.5.3
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.5.2 to 8.5.3.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.5.2...8.5.3)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-28 08:57:03 -07:00
3ee462a24c Update Kubernetes from v1.25.1 to v1.25.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1252
2022-09-22 08:15:30 -07:00
f833b7205d Sync recommended Terraform providers in docs 2022-09-20 08:30:15 -07:00
558e293f78 Update Nginx Ingress and Grafana addons 2022-09-20 08:28:30 -07:00
90782ea820 Remove workaround for preventing search . propagation
* Kubelet v1.25.1 has the fix https://github.com/kubernetes/kubernetes/pull/112157
2022-09-19 22:37:02 -07:00
8dc7cc614c Bump mkdocs-material from 8.4.4 to 8.5.2
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.4.4 to 8.5.2.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.4.4...8.5.2)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-19 22:16:32 -07:00
74d4d56dbd Remove workaround for v1.25.0 ConfigMap rendering issue
* LocalStorageCapacityIsolationFSQuotaMonitoring was reverted back to
alpha in v1.25.1, so we don't need to explicitly disable it anymore

Rel: https://github.com/kubernetes/kubernetes/issues/112081
2022-09-19 09:10:24 -07:00
5abe84b520 Update etcd from v3.5.4 to v3.5.5
* https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v355
2022-09-15 09:01:45 -07:00
951209d113 Update Cilium from v1.12.1 to v1.12.2
* https://github.com/cilium/cilium/releases/tag/v1.12.2
2022-09-15 08:28:37 -07:00
09751cc0e8 Update Kubernetes from v1.25.0 to v1.25.1
* https://github.com/kubernetes/kubernetes/releases/tag/v1.25.1
2022-09-15 08:23:22 -07:00
c14300f0be Update Calico from v3.23.3 to v3.24.1
* https://github.com/projectcalico/calico/releases/tag/v3.24.1
2022-09-14 08:09:38 -07:00
37de9ca2ae Bump mkdocs-material from 8.4.2 to 8.4.4
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.4.2 to 8.4.4.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.4.2...8.4.4)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-14 07:42:59 -07:00
1786e34f33 Revert Graceful Node Shutdown feature
* Disable Kubelet Graceful Node Shutdown on worker nodes (enabled in
Kubernetes v1.25.0 https://github.com/poseidon/typhoon/pull/1222)
* Graceful node shutdown allows 30s for critical pods and 15s for
regular pods to shut down before releasing the inhibitor lock that
allows the host to shut down
* Unfortunately, without further configuration options, regular pods
and the node are shut down at the same time at the end of the 45s
period. In practice, enabling this feature leaves Error or Completed
pods in kube-apiserver state until manually cleaned up. This feature
is not ready for general use
* Fix issue where Error/Completed pods accumulate whenever any
node restarts (or auto-updates), visible in kubectl get pods
* This issue wasn't apparent in initial testing and seems to only
affect non-critical pods (since critical pods are killed earlier),
but it's very apparent on our real clusters

Rel: https://github.com/kubernetes/kubernetes/issues/110755
2022-09-10 14:58:44 -07:00
5f612c82e2 Update kube-state-metrics and Grafana addons 2022-09-01 08:58:32 -07:00
99 changed files with 1199 additions and 637 deletions

1
.gitignore vendored Normal file
View File

@ -0,0 +1 @@
site/

View File

@ -4,6 +4,126 @@ Notable changes between versions.
## Latest
* Kubernetes [v1.26.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1262)
* Update Cilium from v1.12.5 to [v1.12.6](https://github.com/cilium/cilium/releases/tag/v1.12.6)
* Update flannel from v0.20.2 to [v0.21.2](https://github.com/flannel-io/flannel/releases/tag/v0.21.2)
### Bare-Metal
* Add a `worker` module to allow customizing individual worker nodes ([#1295](https://github.com/poseidon/typhoon/pull/1295))
## v1.26.1
* Kubernetes [v1.26.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261)
* Update etcd from v3.5.6 to [v3.5.7](https://github.com/etcd-io/etcd/releases/tag/v3.5.7)
* Update Cilium from v1.12.4 to [v1.12.5](https://github.com/cilium/cilium/releases/tag/v1.12.5)
* Update Calico from v3.24.5 to [v3.25.0](https://github.com/projectcalico/calico/releases/tag/v3.25.0)
* Update CoreDNS from v1.9.3 to [v1.9.4](https://github.com/poseidon/terraform-render-bootstrap/pull/341)
## v1.26.0
* Kubernetes [v1.26.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260)
* Update etcd from v3.5.5 to [v3.5.6](https://github.com/etcd-io/etcd/releases/tag/v3.5.6)
* Update Cilium from v1.12.3 to [v1.12.4](https://github.com/cilium/cilium/releases/tag/v1.12.4)
* Update flannel from v0.15.1 to [v0.20.2](https://github.com/flannel-io/flannel/releases/tag/v0.20.2)
* Reminder: Modules are no longer published to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) ([#1282](https://github.com/poseidon/typhoon/pull/1282))
* See [#1282](https://github.com/poseidon/typhoon/pull/1282) and [v1.25.4](https://github.com/poseidon/typhoon/releases/tag/v1.25.4) for details
### AWS
* Migrate AWS launch configurations to launch templates ([#1275](https://github.com/poseidon/typhoon/pull/1275))
* Starting Dec 31, 2022 AWS won't add new instance types/families to launch configurations
### Addons
* Update ingress-nginx from v1.3.1 to [v1.5.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.5.1)
* Update Prometheus from v2.40.1 to [v2.40.5](https://github.com/prometheus/prometheus/releases/tag/v2.40.5)
* Update node-exporter from v1.3.1 to [v1.5.0](https://github.com/prometheus/node_exporter/releases/tag/v1.5.0)
* Update kube-state-metrics from v2.6.0 to [v2.7.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.7.0)
* Update Grafana from v9.2.4 to [v9.3.1](https://github.com/grafana/grafana/releases/tag/v9.3.1)
## v1.25.4
* Kubernetes [v1.25.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254)
* Update Calico from v3.24.1 to [v3.24.5](https://github.com/projectcalico/calico/releases/tag/v3.24.5)
* Allow Kubelet kubeconfig to drain nodes, if desired ([#330](https://github.com/poseidon/terraform-render-bootstrap/pull/330))
* Re-enable Kubelet Graceful Node Shutdown ([#1261](https://github.com/poseidon/typhoon/pull/1261))
* Introduce companion project [poseidon/scuttle](https://github.com/poseidon/scuttle)
* Link to new Mastodon account for release announcements
* [@typhoon@fosstodon.org](https://fosstodon.org/@typhoon)
* [@poseidon@fosstodon.org](https://fosstodon.org/@poseidon)
* Deprecate publishing to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon)
* Typhoon docs have always shown using Git-based module sources, not the Terraform Module Registry
* Module usage should be `source = "git::https://github.com/poseidon/typhoon/...` not `source = poseidon/kubernetes/...`
* Terraform's Module Registry requires subtree mirroring typhoon to special terraform-platform-kubernetes repos, only supports release versions (no commit SHAs or forks), only ever contained Flatcar Linux modules (not Fedora CoreOS) for historical reasons
* Note, this does not affect Terraform Providers like `poseidon/matchbox` or `poseidon/ct`, the registry works well for providers
### Fedora CoreOS
* Remove unused `Wants=network.target` from `etcd-member.service` ([#1254](https://github.com/poseidon/typhoon/pull/1254))
### Cloud
* Remove defunct `delete-node.service` from worker node configurations ([#1256](https://github.com/poseidon/typhoon/pull/1256))
### Addons
* Update Prometheus from v2.39.1 to [v2.40.1](https://github.com/prometheus/prometheus/releases/tag/v2.40.1)
* Update Grafana from v9.1.7 to [v9.2.4](https://github.com/grafana/grafana/releases/tag/v9.2.4)
## v1.25.3
* Kubernetes [v1.25.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` for addons ([#1246](https://github.com/poseidon/typhoon/pull/1246))
* Update Cilium from v1.12.2 to [v1.12.3](https://github.com/cilium/cilium/releases/tag/v1.12.3) ([#1253](https://github.com/poseidon/typhoon/pull/1253))
### Azure
* Change default Azure `worker_type` from [`Standard_DS1_v2`](https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series) to [`Standard_D2as_v5`](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
* Get 2 VCPU, 7 GiB, 12500Mbps (vs 1 VCPU, 3.5GiB, 750 Mbps)
* Small increase in pay-as-you-go price ($53.29 -> $62.78)
* Small increase in spot price ($5.64/mo -> $7.37/mo)
* Change from Intel to AMD EPYC (`D2as_v5` cheaper than `D2s_v5`)
### Flatcar Linux
* Add Flatcar Linux ARM64 support on Azure ([docs](https://typhoon.psdn.io/advanced/arm64/), [#1251](https://github.com/poseidon/typhoon/pull/1251))
* Switch from Azure Hypervisor gen1 to gen2 (**action required**) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
* Run `az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2`
### Docs
* Remove old docs note about not supporting ARM64 with Calico
* Typhoon supports ARM64 with `cilium`, `calico`, and `flannel`
### Addons
* Update Prometheus from v2.38.0 to [v2.39.1](https://github.com/prometheus/prometheus/releases/tag/v2.39.1)
* Update Grafana from v9.1.6 to [v9.1.7](https://github.com/grafana/grafana/releases/tag/v9.1.7)
## v1.25.2
Kubernetes v1.25.2 was skipped since there were minimal changes upstream.
## v1.25.1
* Kubernetes [v1.25.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1251)
* Update etcd from v3.5.4 to [v3.5.5](https://github.com/etcd-io/etcd/releases/tag/v3.5.5)
* Update Cilium from v1.12.1 to [v1.12.2](https://github.com/cilium/cilium/releases/tag/v1.12.2)
* Update Calico from v3.23.3 to [v3.24.1](https://github.com/projectcalico/calico/releases/tag/v3.24.1)
* Revert Kubelet Graceful Node Shutdown on worker nodes ([#1227](https://github.com/poseidon/typhoon/pull/1227))
* Fix issue where non-critical pods are left in Error/Completed state on node shutdown
* Remove feature flag disable workaround for [kubernetes/kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081)
* Kubernetes [reverted](https://github.com/kubernetes/kubernetes/pull/112078) `LocalStorageCapacityIsolationFSQuotaMonitoring` back to alpha
* Remove workaround for preventing `search .` propagation in [kubernetes/kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135)
* Upstream Kubernetes [fix](https://github.com/kubernetes/kubernetes/pull/112157)
### Addons
* Update kube-state-metrics from v2.5.0 to [v2.6.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.6.0)
* Update ingress-nginx from v1.3.0 to [v1.3.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.3.1)
* Update Grafana from v9.1.0 to [v9.1.6](https://github.com/grafana/grafana/releases/tag/v9.1.6)
## v1.25.0
* Kubernetes [v1.25.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1250)

View File

@ -1,4 +1,4 @@
# Typhoon [![Release](https://img.shields.io/github/v/release/poseidon/typhoon)](https://github.com/poseidon/typhoon/releases) [![Stars](https://img.shields.io/github/stars/poseidon/typhoon)](https://github.com/poseidon/typhoon/stargazers) [![Sponsors](https://img.shields.io/github/sponsors/poseidon?logo=github)](https://github.com/sponsors/poseidon) [![Twitter](https://img.shields.io/badge/follow-news-1da1f2?logo=twitter)](https://twitter.com/typhoon8s)
# Typhoon [![Release](https://img.shields.io/github/v/release/poseidon/typhoon)](https://github.com/poseidon/typhoon/releases) [![Stars](https://img.shields.io/github/stars/poseidon/typhoon)](https://github.com/poseidon/typhoon/stargazers) [![Sponsors](https://img.shields.io/github/sponsors/poseidon?logo=github)](https://github.com/sponsors/poseidon) [![Mastodon](https://img.shields.io/badge/follow-news-6364ff?logo=mastodon)](https://fosstodon.org/@typhoon)
<img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
@ -13,7 +13,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -50,6 +50,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](azure/flatcar-linux/kubernetes) | alpha |
## Documentation
@ -64,7 +65,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -103,9 +104,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.25.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.25.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.25.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.26.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.26.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.26.2
```
List the pods.
@ -158,5 +159,12 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i
<img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" width="201px">
</a>
<br>
<br>
<a href="https://deploy.equinix.com/">
<img src="https://storage.googleapis.com/poseidon/equinix.png" width="201px">
</a>
<br>
<br>
If you'd like your company here, please contact dghubble at psdn.io.

View File

@ -24,7 +24,7 @@ spec:
type: RuntimeDefault
containers:
- name: grafana
image: docker.io/grafana/grafana:9.1.0
image: docker.io/grafana/grafana:9.3.1
env:
- name: GF_PATHS_CONFIG
value: "/etc/grafana/custom.ini"

View File

@ -23,7 +23,7 @@ spec:
type: RuntimeDefault
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
args:
- /nginx-ingress-controller
- --controller-class=k8s.io/public

View File

@ -23,7 +23,7 @@ spec:
type: RuntimeDefault
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
args:
- /nginx-ingress-controller
- --controller-class=k8s.io/public

View File

@ -23,7 +23,7 @@ spec:
type: RuntimeDefault
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
args:
- /nginx-ingress-controller
- --controller-class=k8s.io/public

View File

@ -23,7 +23,7 @@ spec:
type: RuntimeDefault
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
args:
- /nginx-ingress-controller
- --controller-class=k8s.io/public

View File

@ -23,7 +23,7 @@ spec:
type: RuntimeDefault
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
args:
- /nginx-ingress-controller
- --controller-class=k8s.io/public

View File

@ -21,7 +21,7 @@ spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: quay.io/prometheus/prometheus:v2.38.0
image: quay.io/prometheus/prometheus:v2.40.5
args:
- --web.listen-address=0.0.0.0:9090
- --config.file=/etc/prometheus/prometheus.yaml

View File

@ -25,7 +25,7 @@ spec:
serviceAccountName: kube-state-metrics
containers:
- name: kube-state-metrics
image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
ports:
- name: metrics
containerPort: 8080

View File

@ -30,7 +30,7 @@ spec:
hostPID: true
containers:
- name: node-exporter
image: quay.io/prometheus/node-exporter:v1.3.1
image: quay.io/prometheus/node-exporter:v1.5.0
args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@ -57,7 +57,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -116,7 +116,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.25.0
quay.io/poseidon/kubelet:v1.26.2
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -151,8 +151,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
tags = {}
}
# network

View File

@ -29,7 +29,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -77,19 +77,6 @@ systemd:
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -119,8 +106,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
vpc_zone_identifier = var.subnet_ids
# template
launch_configuration = aws_launch_configuration.worker.name
launch_template {
id = aws_launch_template.worker.id
version = aws_launch_template.worker.latest_version
}
# target groups to which instances should be added
target_group_arns = flatten([
@ -49,25 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}
# Worker template
resource "aws_launch_configuration" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false
resource "aws_launch_template" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
monitoring {
enabled = false
}
user_data = data.ct_config.worker.rendered
user_data = sensitive(base64encode(data.ct_config.worker.rendered))
# storage
root_block_device {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
ebs_optimized = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
delete_on_termination = true
}
}
# network
security_groups = var.security_groups
vpc_security_group_ids = var.security_groups
# spot
dynamic "instance_market_options" {
for_each = var.spot_price > 0 ? [1] : []
content {
market_type = "spot"
spot_options {
max_price = var.spot_price
}
}
}
lifecycle {
// Override the default destroy and replace update behavior

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -58,7 +58,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -109,7 +109,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -150,8 +150,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -32,6 +32,7 @@ resource "aws_instance" "controllers" {
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
tags = {}
}
# network

View File

@ -30,7 +30,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -78,19 +78,6 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
@ -118,8 +105,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
vpc_zone_identifier = var.subnet_ids
# template
launch_configuration = aws_launch_configuration.worker.name
launch_template {
id = aws_launch_template.worker.id
version = aws_launch_template.worker.latest_version
}
# target groups to which instances should be added
target_group_arns = flatten([
@ -49,25 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}
# Worker template
resource "aws_launch_configuration" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false
resource "aws_launch_template" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
monitoring {
enabled = false
}
user_data = data.ct_config.worker.rendered
user_data = sensitive(base64encode(data.ct_config.worker.rendered))
# storage
root_block_device {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
ebs_optimized = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
delete_on_termination = true
}
}
# network
security_groups = var.security_groups
vpc_security_group_ids = var.security_groups
# spot
dynamic "instance_market_options" {
for_each = var.spot_price > 0 ? [1] : []
content {
market_type = "spot"
spot_options {
max_price = var.spot_price
}
}
}
lifecycle {
// Override the default destroy and replace update behavior
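The pool keeps its `spot_price` input; with the launch template, a nonzero value now renders the `instance_market_options` block rather than the old launch configuration argument. A minimal sketch of a worker pool requesting spot capacity, assuming the usual worker-pool wiring (module path, names, and values are illustrative):

```tf
module "tempest-spot-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.2"

  # AWS (outputs from the cluster module)
  vpc_id          = module.tempest.vpc_id
  subnet_ids      = module.tempest.subnet_ids
  security_groups = module.tempest.worker_security_groups

  # configuration
  name               = "tempest-spot"
  kubeconfig         = module.tempest.kubeconfig
  ssh_authorized_key = var.ssh_authorized_key

  # bid up to this hourly price for spot capacity (illustrative)
  worker_count = 2
  spot_price   = 0.10
}
```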

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@ -54,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -111,7 +111,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.25.0
quay.io/poseidon/kubelet:v1.26.2
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -146,8 +146,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}
variable "os_image" {

View File

@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -72,19 +72,6 @@ systemd:
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -114,8 +101,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}
variable "os_image" {

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -56,7 +56,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -105,7 +105,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -146,8 +146,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -17,7 +17,9 @@ resource "azurerm_dns_a_record" "etcds" {
locals {
# Container Linux derivative
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}
# Controller availability set to spread controllers
@ -53,16 +55,19 @@ resource "azurerm_linux_virtual_machine" "controllers" {
# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}
plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}
# network

View File

@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}
variable "os_image" {
@ -133,12 +133,15 @@ variable "worker_node_labels" {
default = []
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
variable "arch" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
description = "Container architecture (amd64 or arm64)"
default = "amd64"
validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}
variable "daemonset_tolerations" {
@ -146,3 +149,11 @@ variable "daemonset_tolerations" {
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
default = []
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
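The new `arch` input selects between the x86 `free` offer (with a Marketplace `plan`) and the arm64 `corevm` offer (no plan). A minimal sketch of opting an Azure Flatcar cluster into arm64, assuming the rest of the cluster is configured as usual; the VM SKU shown is an illustrative ARM-capable size:

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.2"

  # ... other required cluster arguments ...

  # select the arm64 Marketplace image (corevm offer, no plan block)
  arch        = "arm64"
  worker_type = "Standard_D2pls_v5" # illustrative ARM SKU
}
```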

View File

@ -21,4 +21,5 @@ module "workers" {
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets
node_labels = var.worker_node_labels
arch = var.arch
}

View File

@ -28,7 +28,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -74,19 +74,6 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
@ -114,8 +101,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}
variable "os_image" {
@ -100,6 +100,17 @@ variable "node_taints" {
default = []
}
variable "arch" {
type = string
description = "Container architecture (amd64 or arm64)"
default = "amd64"
validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {

View File

@ -1,6 +1,8 @@
locals {
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}
# Workers scale set
@ -24,16 +26,19 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}
plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}
# Azure requires setting admin_ssh_key, though Ignition custom_data handles it too

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

View File

@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -113,7 +113,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=-/usr/bin/podman rm bootstrap
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
@ -124,21 +124,6 @@ systemd:
$${KUBELET_IMAGE}
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
- name: fix-resolv-conf-search.service
enabled: true
contents: |
[Unit]
Description=Remove search . from /etc/resolv.conf
DefaultDependencies=no
Requires=systemd-resolved.service
After=systemd-resolved.service
BindsTo=systemd-resolved.service
[Service]
Type=oneshot
ExecStartPre=/usr/bin/sleep 5
ExecStart=/usr/bin/sed -i -e "s/^search .$//" /run/systemd/resolve/resolv.conf
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /var/lib/etcd
@ -171,8 +156,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -1,22 +0,0 @@
# Match each controller or worker to a profile
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]
selector = {
mac = var.controllers.*.mac[count.index]
}
}
resource "matchbox_group" "worker" {
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers.*.name[count.index])
profile = matchbox_profile.workers.*.name[count.index]
selector = {
mac = var.workers.*.mac[count.index]
}
}

View File

@ -3,6 +3,13 @@ output "kubeconfig-admin" {
sensitive = true
}
# Outputs for workers
output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
sensitive = true
}
# Outputs for debug
output "assets_dist" {

View File

@ -28,6 +28,16 @@ locals {
args = var.cached_install ? local.cached_args : local.remote_args
}
# Match a controller to a profile by MAC
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]
selector = {
mac = var.controllers.*.mac[count.index]
}
}
// Fedora CoreOS controller profile
resource "matchbox_profile" "controllers" {
@ -55,30 +65,3 @@ data "ct_config" "controllers" {
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}
// Fedora CoreOS worker profile
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}
# Fedora CoreOS workers
data "ct_config" "workers" {
count = length(var.workers)
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

View File

@ -15,7 +15,6 @@ resource "null_resource" "copy-controller-secrets" {
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]
@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
}
}
# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.workers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.controller,
matchbox_group.worker,
]
connection {
type = "ssh"
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}
provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "/home/core/kubeconfig"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
"sudo touch /etc/kubernetes",
]
}
}
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
# while no Kubelets are running.
depends_on = [
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]
connection {

View File

@ -53,6 +53,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
default = []
}
variable "snippets" {

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -81,21 +81,6 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: fix-resolv-conf-search.service
enabled: true
contents: |
[Unit]
Description=Remove search . from /etc/resolv.conf
DefaultDependencies=no
Requires=systemd-resolved.service
After=systemd-resolved.service
BindsTo=systemd-resolved.service
[Service]
Type=oneshot
ExecStartPre=/usr/bin/sleep 5
ExecStart=/usr/bin/sed -i -e "s/^search .$//" /run/systemd/resolve/resolv.conf
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -125,8 +110,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -0,0 +1,63 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = [
"--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]
remote_args = [
"initrd=main",
"coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
]
cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = [
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]
cached_args = [
"initrd=main",
"coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
]
kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
args = var.cached_install ? local.cached_args : local.remote_args
}
// Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
name = format("%s-%s", var.cluster_name, var.name)
profile = matchbox_profile.worker.name
selector = {
mac = var.mac
}
}
// Fedora CoreOS worker profile
resource "matchbox_profile" "worker" {
name = format("%s-worker-%s", var.cluster_name, var.name)
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)
raw_ignition = data.ct_config.worker.rendered
}
# Fedora CoreOS workers
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
})
strict = true
snippets = var.snippets
}

View File

@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.worker,
]
connection {
type = "ssh"
host = var.domain
user = "core"
timeout = "60m"
}
provisioner "file" {
content = var.kubeconfig
destination = "/home/core/kubeconfig"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
"sudo touch /etc/kubernetes",
]
}
}

View File

@ -0,0 +1,111 @@
variable "cluster_name" {
type = string
description = "Must be set to the `cluster_name` of cluster"
}
# bare-metal
variable "matchbox_http_endpoint" {
type = string
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "os_stream" {
type = string
description = "Fedora CoreOS release stream (e.g. stable, testing, next)"
default = "stable"
validation {
condition = contains(["stable", "testing", "next"], var.os_stream)
error_message = "The os_stream must be stable, testing, or next."
}
}
variable "os_version" {
type = string
description = "Fedora CoreOS version to PXE and install (e.g. 31.20200310.3.0)"
}
# machine
variable "name" {
type = string
description = "Unique name for the machine (e.g. node1)"
}
variable "mac" {
type = string
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
}
variable "domain" {
type = string
description = "Fully qualified domain name (e.g. node1.example.com)"
}
# configuration
variable "kubeconfig" {
type = string
description = "Must be set to `kubeconfig` output by cluster"
}
variable "ssh_authorized_key" {
type = string
description = "SSH public key for user 'core'"
}
variable "snippets" {
type = list(string)
description = "List of Butane snippets"
default = []
}
variable "node_labels" {
type = list(string)
description = "List of initial node labels"
default = []
}
variable "node_taints" {
type = list(string)
description = "List of initial node taints"
default = []
}
# optional
variable "cached_install" {
type = bool
description = "Whether Fedora CoreOS should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
default = false
}
variable "install_disk" {
type = string
description = "Disk device to install Fedora CoreOS (e.g. sda)"
default = "sda"
}
variable "kernel_args" {
type = list(string)
description = "Additional kernel arguments to provide at PXE boot."
default = []
}
# unofficial, undocumented, unsupported
variable "service_cidr" {
type = string
description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
default = "10.3.0.0/16"
}
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = string
default = "cluster.local"
}
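These inputs let a single machine be defined as a discrete worker and attached to an existing bare-metal cluster via the cluster's new `kubeconfig` output. A minimal sketch, assuming the submodule lives at `bare-metal/fedora-coreos/kubernetes/worker`; machine details and the OS version are illustrative:

```tf
module "mercury-node2" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.26.2"

  # bare-metal
  cluster_name           = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  os_stream              = "stable"
  os_version             = "37.20230218.3.0" # illustrative

  # machine
  name   = "node2"
  mac    = "52:54:00:b2:2f:86"
  domain = "node2.example.com"

  # configuration
  kubeconfig         = module.mercury.kubeconfig
  ssh_authorized_key = "ssh-ed25519 AAAA..."
}
```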

View File

@ -0,0 +1,17 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}
matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
}
}
}

View File

@ -0,0 +1,30 @@
module "workers" {
count = length(var.workers)
source = "./worker"
cluster_name = var.cluster_name
# metal
matchbox_http_endpoint = var.matchbox_http_endpoint
os_stream = var.os_stream
os_version = var.os_version
# machine
name = var.workers[count.index].name
mac = var.workers[count.index].mac
domain = var.workers[count.index].domain
# configuration
kubeconfig = module.bootstrap.kubeconfig-kubelet
ssh_authorized_key = var.ssh_authorized_key
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
snippets = lookup(var.snippets, var.workers[count.index].name, [])
# optional
cached_install = var.cached_install
install_disk = var.install_disk
kernel_args = var.kernel_args
}
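The per-worker label, taint, and snippet maps are looked up by worker `name` with an empty default, so only workers that deviate from the defaults need entries. A hedged sketch of how a cluster definition might set them (names and values are illustrative):

```tf
module "mercury" {
  # ... cluster configuration ...

  worker_node_labels = {
    node2 = ["role=storage"]
  }
  worker_node_taints = {
    node2 = ["role=storage:NoSchedule"]
  }
}
```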

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

View File

@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -64,7 +64,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -114,7 +114,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -157,8 +157,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -1,35 +0,0 @@
resource "matchbox_group" "install" {
count = length(var.controllers) + length(var.workers)
name = format("install-%s", concat(var.controllers.*.name, var.workers.*.name)[count.index])
# pick Matchbox profile (Flatcar upstream or Matchbox image cache)
profile = var.cached_install ? matchbox_profile.cached-flatcar-install.*.name[count.index] : matchbox_profile.flatcar-install.*.name[count.index]
selector = {
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
}
}
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
profile = matchbox_profile.controllers.*.name[count.index]
selector = {
mac = var.controllers[count.index].mac
os = "installed"
}
}
resource "matchbox_group" "worker" {
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers[count.index].name)
profile = matchbox_profile.workers.*.name[count.index]
selector = {
mac = var.workers[count.index].mac
os = "installed"
}
}

View File

@ -1,54 +1,53 @@
locals {
# flatcar-stable -> stable channel
channel = split("-", var.os_channel)[1]
}
// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
count = length(var.controllers) + length(var.workers)
name = format("%s-flatcar-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
initrd = [
remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
remote_initrd = [
"${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = flatten([
args = [
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])
]
raw_ignition = data.ct_config.install.*.rendered[count.index]
}
// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-install" {
count = length(var.controllers) + length(var.workers)
name = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
initrd = [
cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
cached_initrd = [
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])
kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}
raw_ignition = data.ct_config.cached-install.*.rendered[count.index]
# Match controllers to install profiles by MAC
resource "matchbox_group" "install" {
count = length(var.controllers)
name = format("install-%s", var.controllers[count.index].name)
profile = matchbox_profile.install[count.index].name
selector = {
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
}
}
// Flatcar Linux install
resource "matchbox_profile" "install" {
count = length(var.controllers)
name = format("%s-install-%s", var.cluster_name, var.controllers.*.name[count.index])
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)
raw_ignition = data.ct_config.install[count.index].rendered
}
# Flatcar Linux install
data "ct_config" "install" {
count = length(var.controllers) + length(var.workers)
count = length(var.controllers)
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
@ -57,25 +56,20 @@ data "ct_config" "install" {
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# only cached profile adds -b baseurl
baseurl_flag = ""
baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
})
strict = true
}
# Flatcar Linux cached install
data "ct_config" "cached-install" {
count = length(var.controllers) + length(var.workers)
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
})
strict = true
# Match each controller by MAC
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
profile = matchbox_profile.controllers[count.index].name
selector = {
mac = var.controllers[count.index].mac
os = "installed"
}
}
// Kubernetes Controller profiles
@ -99,25 +93,3 @@ data "ct_config" "controllers" {
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}
// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}
# Flatcar Linux workers
data "ct_config" "workers" {
count = length(var.workers)
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

View File

@ -16,7 +16,6 @@ resource "null_resource" "copy-controller-secrets" {
depends_on = [
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]
@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
}
}
# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.workers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
]
connection {
type = "ssh"
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}
provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "/home/core/kubeconfig"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
# while no Kubelets are running.
depends_on = [
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]
connection {
@ -99,4 +66,3 @@ resource "null_resource" "bootstrap" {
]
}
}

View File

@ -52,6 +52,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
default = []
}
variable "snippets" {
@ -72,6 +73,20 @@ variable "worker_node_taints" {
default = {}
}
variable "worker_bind_devices" {
type = map(list(object({
source = string
target = string
})))
description = <<EOD
List of devices to bind on kubelet container for direct storage usage
[
{ source = "/dev/sdb", target = "/dev/sdb" },
{ source = "/dev/sdc", target = "/dev/sdc" }
]
EOD
}
# configuration
variable "k8s_domain_name" {

View File

@ -0,0 +1,46 @@
variant: flatcar
version: 1.0.0
systemd:
units:
- name: installer.service
enabled: true
contents: |
[Unit]
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStart=/opt/installer
[Install]
WantedBy=multi-user.target
# Avoid using the standard SSH port so terraform apply cannot SSH until
# post-install. But admins may SSH to debug disk install problems.
# After install, sshd will use port 22 and users/terraform can connect.
- name: sshd.socket
dropins:
- name: 10-sshd-port.conf
contents: |
[Socket]
ListenStream=
ListenStream=2222
storage:
files:
- path: /opt/installer
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
flatcar-install \
-d ${install_disk} \
-C ${os_channel} \
-V ${os_version} \
${baseurl_flag} \
-i ignition.json
udevadm settle
systemctl reboot
passwd:
users:
- name: core
ssh_authorized_keys:
- "${ssh_authorized_key}"

View File

@ -36,7 +36,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -77,6 +77,9 @@ systemd:
--register-with-taints=${taint} \
%{~ endfor ~}
--node-labels=node.kubernetes.io/node
%{~ for device in node_devices ~}
--device=${device.source}:${device.target}
%{~ endfor ~}
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -115,8 +118,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -0,0 +1,88 @@
locals {
# flatcar-stable -> stable channel
channel = split("-", var.os_channel)[1]
remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
remote_initrd = [
"${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])
cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
cached_initrd = [
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}
# Match machine to an install profile by MAC
resource "matchbox_group" "install" {
name = format("install-%s", var.name)
profile = matchbox_profile.install.name
selector = {
mac = var.mac
}
}
// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "install" {
name = format("%s-install-%s", var.cluster_name, var.name)
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)
raw_ignition = data.ct_config.install.rendered
}
# Flatcar Linux install
data "ct_config" "install" {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = var.mac
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# only cached profile adds -b baseurl
baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
})
strict = true
}
# Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
name = format("%s-%s", var.cluster_name, var.name)
profile = matchbox_profile.worker.name
selector = {
mac = var.mac
os = "installed"
}
}
// Flatcar Linux Worker profile
resource "matchbox_profile" "worker" {
name = format("%s-worker-%s", var.cluster_name, var.name)
raw_ignition = data.ct_config.worker.rendered
}
# Flatcar Linux workers
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
node_devices = var.node_devices
})
strict = true
snippets = var.snippets
}

View File

@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.install,
matchbox_group.worker,
]
connection {
type = "ssh"
host = var.domain
user = "core"
timeout = "60m"
}
provisioner "file" {
content = var.kubeconfig
destination = "/home/core/kubeconfig"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}

View File

@ -0,0 +1,129 @@
variable "cluster_name" {
type = string
description = "Must be set to the `cluster_name` of cluster"
}
# bare-metal
variable "matchbox_http_endpoint" {
type = string
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "os_channel" {
type = string
description = "Channel for a Flatcar Linux (flatcar-stable, flatcar-beta, flatcar-alpha)"
validation {
condition = contains(["flatcar-stable", "flatcar-beta", "flatcar-alpha"], var.os_channel)
error_message = "The os_channel must be flatcar-stable, flatcar-beta, or flatcar-alpha."
}
}
variable "os_version" {
type = string
description = "Version of Flatcar Linux to PXE and install (e.g. 2079.5.1)"
}
# machine
variable "name" {
type = string
description = "Unique name for the machine (e.g. node1)"
}
variable "mac" {
type = string
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
}
variable "domain" {
type = string
description = "Fully qualified domain name (e.g. node1.example.com)"
}
# configuration
variable "kubeconfig" {
type = string
description = "Must be set to `kubeconfig` output by cluster"
}
variable "ssh_authorized_key" {
type = string
description = "SSH public key for user 'core'"
}
variable "snippets" {
type = list(string)
description = "List of Butane snippets"
default = []
}
variable "node_labels" {
type = list(string)
description = "List of initial node labels"
default = []
}
variable "node_taints" {
type = list(string)
description = "List of initial node taints"
default = []
}
variable "node_devices" {
type = list(object({
source = string
target = string
}))
description = "List of devices object to bind with --device option"
default = []
}
# optional
variable "download_protocol" {
type = string
description = "Protocol iPXE should use to download the kernel and initrd. Defaults to https, which requires iPXE compiled with crypto support. Unused if cached_install is true."
default = "https"
}
variable "cached_install" {
type = bool
description = "Whether Flatcar Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
default = false
}
variable "install_disk" {
type = string
default = "/dev/sda"
description = "Disk device to which the install profiles should install Flatcar Linux (e.g. /dev/sda)"
}
variable "kernel_args" {
type = list(string)
description = "Additional kernel arguments to provide at PXE boot."
default = []
}
# unofficial, undocumented, unsupported
variable "service_cidr" {
type = string
description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
default = "10.3.0.0/16"
}
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
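A minimal sketch of a discrete Flatcar worker that also binds a raw block device into the kubelet container, assuming the submodule path mirrors the Fedora CoreOS layout; the machine details, OS version, and device are illustrative:

```tf
module "mercury-node3" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes/worker?ref=v1.26.2"

  # bare-metal
  cluster_name           = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  os_channel             = "flatcar-stable"
  os_version             = "3374.2.3" # illustrative

  # machine
  name   = "node3"
  mac    = "52:54:00:c3:61:77"
  domain = "node3.example.com"

  # configuration
  kubeconfig         = module.mercury.kubeconfig
  ssh_authorized_key = "ssh-ed25519 AAAA..."

  # expose a raw block device to the kubelet container (e.g. for a PV)
  node_devices = [
    { source = "/dev/sdb", target = "/dev/sdb" }
  ]
}
```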

View File

@ -0,0 +1,16 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}
matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
}
}
}

View File

@ -0,0 +1,33 @@
module "workers" {
count = length(var.workers)
source = "./worker"
cluster_name = var.cluster_name
# metal
matchbox_http_endpoint = var.matchbox_http_endpoint
os_channel = var.os_channel
os_version = var.os_version
# machine
name = var.workers[count.index].name
mac = var.workers[count.index].mac
domain = var.workers[count.index].domain
# configuration
kubeconfig = module.bootstrap.kubeconfig-kubelet
ssh_authorized_key = var.ssh_authorized_key
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
snippets = lookup(var.snippets, var.workers[count.index].name, [])
node_devices = lookup(var.worker_bind_devices, var.workers[count.index].name, [])
# optional
download_protocol = var.download_protocol
cached_install = var.cached_install
install_disk = var.install_disk
kernel_args = var.kernel_args
}
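Like the other per-worker maps, `worker_bind_devices` is keyed by worker name and looked up with an empty default. A hedged sketch of setting it in the cluster definition (name and device illustrative):

```tf
module "mercury" {
  # ... cluster configuration ...

  worker_bind_devices = {
    node3 = [
      { source = "/dev/sdb", target = "/dev/sdb" }
    ]
  }
}
```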

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@ -55,7 +55,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -123,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.25.0
quay.io/poseidon/kubelet:v1.26.2
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -153,8 +153,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -28,7 +28,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -82,19 +82,6 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -119,8 +106,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -66,7 +66,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -117,7 +117,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -155,8 +155,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -38,7 +38,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -80,19 +80,6 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -118,8 +105,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -1,19 +1,21 @@
# ARM64
Typhoon has experimental support for ARM64 on AWS, with Fedora CoreOS or Flatcar Linux. Clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.
!!! note
Currently, CNI networking must be set to `flannel` or `cilium`.
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
* AWS with Fedora CoreOS or Flatcar Linux
* Azure with Flatcar Linux
## Cluster
Create a cluster with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` container images.
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
=== "Fedora CoreOS Cluster (arm64)"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "gravitas"
@ -38,7 +40,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "gravitas"
@ -64,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.25.0 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.25.0 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.25.0 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-21-119 Ready <none> 77s v1.26.2 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.26.2 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.26.2 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```
## Hybrid
@ -77,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "gravitas"
@ -100,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "gravitas"
@ -123,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# AWS
vpc_id = module.gravitas.vpc_id
@ -147,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.2"
# AWS
vpc_id = module.gravitas.vpc_id
@ -172,9 +174,34 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.25.0 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.25.0 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.25.0 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.25.0 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-1-73 Ready <none> 111m v1.26.2 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.26.2 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.26.2 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.26.2 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```
## Azure
Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.2"
# Azure
cluster_name = "ramius"
region = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
arch = "arm64"
controller_type = "Standard_D2pls_v5"
worker_type = "Standard_D2pls_v5"
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```

View File

@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"

View File

@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# AWS
vpc_id = module.tempest.vpc_id
@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.2"
# AWS
vpc_id = module.tempest.vpc_id
@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# Azure
region = module.ramius.region
@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.26.2"
# Azure
region = module.ramius.region
@ -189,7 +189,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| worker_count | Number of instances | 1 | 3 |
| vm_type | Machine type for instances | "Standard_DS1_v2" | See below |
| vm_type | Machine type for instances | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | "Regular" | "Spot" |
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
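For example, a minimal sketch (assuming the `ramius-worker-pool` wiring from the example above; required variables omitted) of overriding the machine type and priority:
```tf
module "ramius-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.26.2"

  # ... Azure and cluster wiring as shown in the example above ...

  # optional overrides from the table above
  worker_count = 3
  vm_type      = "Standard_D2as_v5"
  priority     = "Spot"
}
```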
@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.2"
# Google Cloud
region = "europe-west2"
@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.26.2"
# Google Cloud
region = "europe-west2"
@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.25.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.25.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.25.0
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.25.0
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.25.0
yavin-controller-0.c.example-com.internal Ready 6m v1.26.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.26.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.26.2
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.26.2
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.26.2
```
### Variables

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -55,7 +55,7 @@ terraform {
}
aws = {
source = "hashicorp/aws"
version = "4.28.0"
version = "4.31.0"
}
}
}
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "tempest"
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.25.0
ip-10-0-26-65 Ready <none> 10m v1.25.0
ip-10-0-41-21 Ready <none> 10m v1.25.0
ip-10-0-3-155 Ready <none> 10m v1.26.2
ip-10-0-26-65 Ready <none> 10m v1.26.2
ip-10-0-41-21 Ready <none> 10m v1.26.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Azure
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on Azure with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on Azure with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -52,7 +52,7 @@ terraform {
}
azurerm = {
source = "hashicorp/azurerm"
version = "3.20.0"
version = "3.23.0"
}
}
}
@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.26.2"
# Azure
cluster_name = "ramius"
@ -161,9 +161,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.25.0
ramius-worker-000001 Ready <none> 25m v1.25.0
ramius-worker-000002 Ready <none> 24m v1.25.0
ramius-controller-0 Ready <none> 24m v1.26.2
ramius-worker-000001 Ready <none> 25m v1.26.2
ramius-worker-000002 Ready <none> 24m v1.26.2
```
List the pods.
@ -240,7 +240,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.25.0 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.26.2 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.26.2"
# bare-metal
cluster_name = "mercury"
@ -187,6 +187,36 @@ module "mercury" {
}
```
Workers with similar features can be defined inline using the `workers` field as shown above. It's also possible to define discrete workers that attach to the cluster. Discrete workers are more advanced, but more verbose.
```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.26.2"
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
os_stream = "stable"
os_version = "32.20201104.3.0"
# configuration
name = "node2"
mac = "52:54:00:b2:2f:86"
domain = "node2.example.com"
kubeconfig = module.mercury.kubeconfig
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
snippets = []
node_labels = []
node_taints = []
install_disk = "/dev/vda"
cached_install = false
}
...
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-coreos/kubernetes/variables.tf) source.
## ssh-agent
@ -283,9 +313,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.25.0
node2.example.com Ready <none> 10m v1.25.0
node3.example.com Ready <none> 10m v1.25.0
node1.example.com Ready <none> 10m v1.26.2
node2.example.com Ready <none> 10m v1.26.2
node3.example.com Ready <none> 10m v1.26.2
```
List the pods.
@ -325,12 +355,12 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-ed25519 AAAAB3Nz..." |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | [] | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Fedora CoreOS images into the cache | false | true |
| install_disk | Disk device where Fedora CoreOS should be installed | "sda" (not "/dev/sda" like Container Linux) | "sdb" |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
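Since `workers` now defaults to an empty list, a minimal sketch (other required variables as shown earlier) of a cluster with no inline workers, relying on discrete worker modules like `mercury-node2` above:
```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.26.2"

  # ... bare-metal and configuration variables as shown earlier ...

  controllers = [{
    name   = "node1"
    mac    = "52:54:00:a1:9c:ae"
    domain = "node1.example.com"
  }]
  # `workers` omitted (defaults to []); attach discrete worker modules instead
}
```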

View File

@ -1,6 +1,6 @@
# DigitalOcean
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on DigitalOcean with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on DigitalOcean with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -55,7 +55,7 @@ terraform {
}
digitalocean = {
source = "digitalocean/digitalocean"
version = "2.22.1"
version = "2.22.3"
}
}
}
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.26.2"
# Digital Ocean
cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.25.0
10.132.115.81 Ready <none> 10m v1.25.0
10.132.124.107 Ready <none> 10m v1.25.0
10.132.110.130 Ready <none> 10m v1.26.2
10.132.115.81 Ready <none> 10m v1.26.2
10.132.124.107 Ready <none> 10m v1.26.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on Google Compute Engine with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -56,7 +56,7 @@ terraform {
}
google = {
source = "hashicorp/google"
version = "4.33.0"
version = "4.37.0"
}
}
}
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.25.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.25.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.25.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.26.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.26.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.26.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on AWS with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on AWS with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -55,7 +55,7 @@ terraform {
}
aws = {
source = "hashicorp/aws"
version = "4.28.0"
version = "4.31.0"
}
}
}
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.2"
# AWS
cluster_name = "tempest"
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.25.0
ip-10-0-26-65 Ready <none> 10m v1.25.0
ip-10-0-41-21 Ready <none> 10m v1.25.0
ip-10-0-3-155 Ready <none> 10m v1.26.2
ip-10-0-26-65 Ready <none> 10m v1.26.2
ip-10-0-41-21 Ready <none> 10m v1.26.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Azure
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on Azure with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on Azure with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -52,7 +52,7 @@ terraform {
}
azurerm = {
source = "hashicorp/azurerm"
version = "3.20.0"
version = "3.23.0"
}
}
}
@ -65,8 +65,8 @@ Additional configuration options are described in the `azurerm` provider [docs](
Flatcar Linux publishes images to the Azure Marketplace and requires accepting terms.
```
az vm image terms show --publish kinvolk --offer flatcar-container-linux-free --plan stable
az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable
az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2
```
## Cluster
@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.2"
# Azure
cluster_name = "ramius"
@ -149,9 +149,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.25.0
ramius-worker-000001 Ready <none> 25m v1.25.0
ramius-worker-000002 Ready <none> 24m v1.25.0
ramius-controller-0 Ready <none> 24m v1.26.2
ramius-worker-000001 Ready <none> 25m v1.26.2
ramius-worker-000002 Ready <none> 24m v1.26.2
```
List the pods.
@ -227,7 +227,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.25.0 cluster on bare-metal with Flatcar Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.26.2 cluster on bare-metal with Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.2"
# bare-metal
cluster_name = "mercury"
@ -190,6 +190,36 @@ module "mercury" {
}
```
Workers with similar features can be defined inline using the `workers` field as shown above. It's also possible to define discrete workers that attach to the cluster. Discrete workers are more advanced, but more verbose.
```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.26.2"
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
os_channel = "flatcar-stable"
os_version = "2345.3.1"
# configuration
name = "node2"
mac = "52:54:00:b2:2f:86"
domain = "node2.example.com"
kubeconfig = module.mercury.kubeconfig
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
snippets = []
node_labels = []
node_taints = []
install_disk = "/dev/vda"
cached_install = false
}
...
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/flatcar-linux/kubernetes/variables.tf) source.
## ssh-agent
@ -293,9 +323,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.25.0
node2.example.com Ready <none> 10m v1.25.0
node3.example.com Ready <none> 10m v1.25.0
node1.example.com Ready <none> 10m v1.26.2
node2.example.com Ready <none> 10m v1.26.2
node3.example.com Ready <none> 10m v1.26.2
```
List the pods.
@ -335,12 +365,12 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | [] | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
| download_protocol | Protocol iPXE uses to download the kernel and initrd. iPXE must be compiled with [crypto](https://ipxe.org/crypto) support for https. Unused if cached_install is true | "https" | "http" |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux or Flatcar images into the cache | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |

View File

@ -1,6 +1,6 @@
# DigitalOcean
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on DigitalOcean with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on DigitalOcean with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -55,7 +55,7 @@ terraform {
}
digitalocean = {
source = "digitalocean/digitalocean"
version = "2.22.1"
version = "2.22.3"
}
}
}
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern
```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.26.2"
# Digital Ocean
cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.25.0
10.132.115.81 Ready <none> 10m v1.25.0
10.132.124.107 Ready <none> 10m v1.25.0
10.132.110.130 Ready <none> 10m v1.26.2
10.132.115.81 Ready <none> 10m v1.26.2
10.132.124.107 Ready <none> 10m v1.26.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.25.0 cluster on Google Compute Engine with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.2 cluster on Google Compute Engine with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -56,7 +56,7 @@ terraform {
}
google = {
source = "hashicorp/google"
version = "4.33.0"
version = "4.37.0"
}
}
}
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.25.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.25.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.25.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.26.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.26.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.26.2
```
List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization
@ -48,6 +48,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |
| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |
## Documentation
@ -61,7 +62,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
# Google Cloud
cluster_name = "yavin"
@ -99,9 +100,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.25.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.25.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.25.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.26.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.26.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.26.2
```
List the pods.

View File

@ -13,12 +13,12 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar
```
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.2"
...
}
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.25.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.2"
...
}
```
@ -192,7 +192,7 @@ Applying edits to most worker fields will start an instance refresh:
However, changing `os_stream`/`os_channel` or new AMIs becoming available will NOT change the launch configuration or trigger an Instance Refresh. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new AMIs.
!!! note
Before Typhoon v1.25.0, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
Before Typhoon v1.26.2, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
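For instance, a rough sketch of that approach (assuming the `tempest` cluster and the worker pool wiring from the worker pools docs; the `tempest-worker-pool-v2` name is illustrative):
```tf
# New pool carrying the updated node configuration
module "tempest-worker-pool-v2" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.2"

  # ... vpc_id, subnet_ids, security_groups, kubeconfig, and ssh key as in the worker pools docs ...

  name         = "tempest-v2"
  worker_count = 2
}
```
Once the new pool's nodes are Ready, scale the old pool's `worker_count` down (or remove the old pool) as desired.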
### Google Cloud
@ -233,7 +233,7 @@ Applying edits to most worker fields will start an instance refresh:
However, changing `os_stream`/`os_channel` or new compute images becoming available will NOT change the launch template or update instances. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new AMIs.
!!! note
Before Typhoon v1.25.0, worker nodes only used new launch templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
Before Typhoon v1.26.2, worker nodes only used new launch templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
## Upgrade poseidon/ct

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@ -54,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -111,7 +111,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.25.0
quay.io/poseidon/kubelet:v1.26.2
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -145,8 +145,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -72,19 +72,6 @@ systemd:
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -113,8 +100,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.25.0 (upstream)
* Kubernetes v1.26.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=3fa08c542c75d5e62ebab2bd28e0a7392f3ffb46"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=607a05692bb213cb2e50e15d854f68b1e11c0492"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -56,7 +56,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -105,7 +105,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -145,8 +145,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -28,7 +28,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.2
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -74,19 +74,6 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.0
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
@ -113,8 +100,6 @@ storage:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
featureGates:
LocalStorageCapacityIsolationFSQuotaMonitoring: false
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s

View File

@ -1,4 +1,4 @@
mkdocs==1.3.1
mkdocs-material==8.4.2
pygments==2.13.0
pymdown-extensions==9.5
mkdocs==1.4.2
mkdocs-material==9.0.15
pygments==2.14.0
pymdown-extensions==9.9.2