Re-add Kubelet metadata service dependency on DigitalOcean
* Restore the original special-casing of DigitalOcean Kubelets
* Fix node metadata InternalIP being set to the IP of the default gateway on DigitalOcean nodes (regressed in v1.12.3)
* Revert the "pretty" node names on DigitalOcean (worker-2 vs IP)
* Closes #424 (full details)
This commit is contained in:
commit 3d6a6d4adb
parent e0bee2e417
@@ -18,6 +18,12 @@ Notable changes between versions.
 * Affects Container Linux and Flatcar Linux install profiles that pull from public images (default). No effect when `cached_install=true` or Fedora Atomic, since those download from Matchbox
 * Add `download_protocol` variable. Recognizing boot firmware TLS support is difficult in some environments, set the protocol to "http" for the old behavior (discouraged)

+#### DigitalOcean
+
+* Fix kubelet hostname-override to set node metadata InternalIP correctly ([#424](https://github.com/poseidon/typhoon/issues/424))
+  * Uniquely, DigitalOcean does not resolve hostnames to instance private IPs. Kubelet auto-detect mechanisms require the internal IP be set directly.
+  * Regressed in v1.12.3 ([#337](https://github.com/poseidon/typhoon/pull/337)) which aimed to provide friendly hostname-based node names on DigitalOcean
+
 #### Addons

 * Update Prometheus from v2.7.1 to [v2.8.0](https://github.com/prometheus/prometheus/releases/tag/v2.8.0)
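For background on the fix above: DigitalOcean's droplet metadata service is the authoritative source for the private IPv4 that the kubelet must report. A minimal sketch of querying it by hand, using the same endpoint the units below hit (the output address is illustrative):

```sh
# The metadata service is link-local, reachable only from the droplet itself.
# Prints the droplet's private IPv4, e.g. 10.132.110.130 (illustrative).
curl --silent http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address
```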
@@ -56,9 +56,12 @@ systemd:
       contents: |
         [Unit]
         Description=Kubelet via Hyperkube
+        Requires=coreos-metadata.service
+        After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
         EnvironmentFile=/etc/kubernetes/kubelet.env
+        EnvironmentFile=/run/metadata/coreos
         Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
           --volume=resolv,kind=host,source=/etc/resolv.conf \
           --mount volume=resolv,target=/etc/resolv.conf \
@@ -90,6 +93,7 @@ systemd:
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --exit-on-lock-contention \
+          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
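On Container Linux, `coreos-metadata.service` writes droplet metadata as KEY=value pairs into `/run/metadata/coreos`; the re-added `EnvironmentFile=` line loads that file so systemd can expand `${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}` in the kubelet's ExecStart (the doubled `$$` only escapes Terraform template interpolation). A sketch of what to expect on a node, with illustrative values:

```sh
# Inspect the env file coreos-metadata.service generates (output illustrative;
# other keys elided).
cat /run/metadata/coreos
# COREOS_DIGITALOCEAN_IPV4_PRIVATE_0=10.132.110.130
```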
@@ -31,9 +31,12 @@ systemd:
       contents: |
         [Unit]
         Description=Kubelet via Hyperkube
+        Requires=coreos-metadata.service
+        After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
         EnvironmentFile=/etc/kubernetes/kubelet.env
+        EnvironmentFile=/run/metadata/coreos
         Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
           --volume=resolv,kind=host,source=/etc/resolv.conf \
           --mount volume=resolv,target=/etc/resolv.conf \
@@ -63,6 +66,7 @@ systemd:
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --exit-on-lock-contention \
+          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
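With the flag restored on both controllers and workers, the node name and the node's InternalIP should both be the droplet's private IPv4. A quick verification sketch (the node name is taken from the docs further down; the jsonpath filter is standard kubectl):

```sh
# Nodes should register under the private IPv4 and report it as InternalIP.
kubectl get nodes -o wide
kubectl get node 10.132.110.130 \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
# 10.132.110.130
```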
@@ -19,9 +19,24 @@ write_files:
       ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
       ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
       ETCD_PEER_CLIENT_CERT_AUTH=true
+  - path: /etc/systemd/system/cloud-metadata.service
+    content: |
+      [Unit]
+      Description=Cloud metadata agent
+      [Service]
+      Type=oneshot
+      Environment=OUTPUT=/run/metadata/cloud
+      ExecStart=/usr/bin/mkdir -p /run/metadata
+      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
+        --url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
+        --retry 10)" > $${OUTPUT}'
+      [Install]
+      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
+      Requires=cloud-metadata.service
+      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -79,6 +94,7 @@ runcmd:
   - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.4"
   - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
   - [systemctl, start, --no-block, etcd.service]
+  - [systemctl, enable, cloud-metadata.service]
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:
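On Fedora Atomic there is no coreos-metadata, so the commit restores a small `cloud-metadata.service` oneshot that fetches the private IPv4 itself and writes it to `/run/metadata/cloud` (the `$${OUTPUT}` doubling again escapes Terraform interpolation). A sketch for exercising the unit by hand on a node, with illustrative output:

```sh
# Run the oneshot manually and confirm it wrote the override file.
sudo systemctl start cloud-metadata.service
cat /run/metadata/cloud
# HOSTNAME_OVERRIDE=10.132.115.81   (illustrative)
```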
@@ -1,8 +1,23 @@
 #cloud-config
 write_files:
+  - path: /etc/systemd/system/cloud-metadata.service
+    content: |
+      [Unit]
+      Description=Cloud metadata agent
+      [Service]
+      Type=oneshot
+      Environment=OUTPUT=/run/metadata/cloud
+      ExecStart=/usr/bin/mkdir -p /run/metadata
+      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
+        --url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
+        --retry 10)" > $${OUTPUT}'
+      [Install]
+      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
+      Requires=cloud-metadata.service
+      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -51,6 +66,7 @@ bootcmd:
   - [modprobe, ip_vs]
 runcmd:
   - [systemctl, daemon-reload]
+  - [systemctl, enable, cloud-metadata.service]
   - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.4"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
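To confirm the drop-in actually reordered the worker's kubelet startup, systemd can show the merged unit and its dependency graph, e.g.:

```sh
# The 10-typhoon.conf drop-in should appear in the merged unit definition...
systemctl cat kubelet.service
# ...and cloud-metadata.service should be among kubelet's requirements.
systemctl list-dependencies kubelet.service | grep cloud-metadata
```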
@@ -152,9 +152,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
 $ kubectl get nodes
 NAME               STATUS  ROLES              AGE  VERSION
-nemo-controller-0  Ready   controller,master  10m  v1.13.4
-nemo-worker-0      Ready   node               10m  v1.13.4
-nemo-worker-1      Ready   node               10m  v1.13.4
+10.132.110.130     Ready   controller,master  10m  v1.13.4
+10.132.115.81      Ready   node               10m  v1.13.4
+10.132.124.107     Ready   node               10m  v1.13.4
 ```

 List the pods.
@@ -175,7 +175,7 @@ kube-system kube-proxy-k35rc 1/1 Running 0
 kube-system   kube-scheduler-3895335239-2bc4c           1/1  Running  0  11m
 kube-system   kube-scheduler-3895335239-b7q47           1/1  Running  1  11m
 kube-system   pod-checkpointer-pr1lq                    1/1  Running  0  11m
-kube-system   pod-checkpointer-pr1lq-nemo-controller-0  1/1  Running  0  10m
+kube-system   pod-checkpointer-pr1lq-10.132.115.81      1/1  Running  0  10m
 ```

 ## Going Further
@@ -160,9 +160,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
 $ kubectl get nodes
 NAME               STATUS  ROLES              AGE  VERSION
-nemo-controller-0  Ready   controller,master  10m  v1.13.4
-nemo-worker-0      Ready   node               10m  v1.13.4
-nemo-worker-1      Ready   node               10m  v1.13.4
+10.132.110.130     Ready   controller,master  10m  v1.13.4
+10.132.115.81      Ready   node               10m  v1.13.4
+10.132.124.107     Ready   node               10m  v1.13.4
 ```

 List the pods.
@@ -183,7 +183,7 @@ kube-system kube-proxy-k35rc 1/1 Running 0
 kube-system   kube-scheduler-3895335239-2bc4c           1/1  Running  0  11m
 kube-system   kube-scheduler-3895335239-b7q47           1/1  Running  1  11m
 kube-system   pod-checkpointer-pr1lq                    1/1  Running  0  11m
-kube-system   pod-checkpointer-pr1lq-nemo-controller-0  1/1  Running  0  10m
+kube-system   pod-checkpointer-pr1lq-10.132.115.81      1/1  Running  0  10m
 ```

 ## Going Further