# Performance
## Provision Time
Provisioning times vary based on the platform. Sampling the time to create (apply) and destroy clusters with 1 controller and 2 workers shows (roughly) what to expect.
| Platform      | Apply        | Destroy      |
|---------------|--------------|--------------|
| AWS           | 6 min        | 5 min        |
| Bare-Metal    | 10-14 min    | NA           |
| Digital Ocean | 3 min 30 sec | 20 sec       |
| Google Cloud  | 7 min        | 4 min 30 sec |
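
The apply and destroy durations above come from timing full Terraform runs. As a minimal sketch of how to sample your own numbers (not necessarily the exact procedure used for this table; flags vary by Terraform version):

```sh
# Time a full cluster creation and teardown.
# -auto-approve skips the interactive confirmation; older Terraform
# releases may use different flags for non-interactive runs.
time terraform apply -auto-approve
time terraform destroy -auto-approve
```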
Notes:
* SOA TTL and NXDOMAIN caching can have a large impact on provision time (see the `dig` example after this list)
* Platforms with auto-scaling take more time to provision (AWS, Google)
* Bare-metal POST times and network bandwidth will affect provision times
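
For the DNS caching note above, the zone's negative-caching (NXDOMAIN) TTL comes from its SOA record (RFC 2308). It can be inspected with `dig`, where `example.com` stands in for your cluster's DNS zone:

```sh
# Print the zone's SOA record; the final field is the SOA minimum TTL,
# which resolvers use to cache NXDOMAIN responses (RFC 2308)
dig +noall +answer SOA example.com
```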
## Network Performance
Network performance varies based on the platform and CNI plugin. `iperf` was used to measure the bandwidth between different hosts and different pods. Host-to-host shows typical bandwidth between host machines. Pod-to-pod shows the bandwidth between two `iperf` containers.
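
A pod-to-pod sample of this kind can be reproduced with two pods running `iperf`, one as a server and one as a client (host-to-host uses the same `iperf` commands run directly on two nodes). A rough sketch, where `example/iperf` is a placeholder for any container image with `iperf` installed:

```sh
# Server pod (image name is a placeholder; use any image that includes iperf)
kubectl run iperf-server --image=example/iperf --restart=Never -- iperf -s

# Note the server pod's IP address
kubectl get pod iperf-server -o wide

# Client pod, pointed at the server pod's IP (substitute the address from above)
kubectl run iperf-client --image=example/iperf --restart=Never -- iperf -c <server-pod-ip>

# The measured bandwidth appears in the client's output
kubectl logs iperf-client
```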
| Platform / Plugin          | Theory | Host to Host | Pod to Pod   |
|----------------------------|-------:|-------------:|-------------:|
| AWS (flannel)              | ?      | 976 Mb/s     | 900-999 Mb/s |
| AWS (calico, MTU 1480)     | ?      | 976 Mb/s     | 100-350 Mb/s |
| AWS (calico, MTU 8991)     | ?      | 976 Mb/s     | 900-999 Mb/s |
| Bare-Metal (flannel)       | 1 Gb/s | 934 Mb/s     | 903 Mb/s     |
| Bare-Metal (calico)        | 1 Gb/s | 941 Mb/s     | 931 Mb/s     |
| Bare-Metal (flannel, bond) | 3 Gb/s | 2.3 Gb/s     | 1.17 Gb/s    |
| Bare-Metal (calico, bond)  | 3 Gb/s | 2.3 Gb/s     | 1.17 Gb/s    |
| Digital Ocean              | ?      | 938 Mb/s     | 820-880 Mb/s |
| Google Cloud (flannel)     | ?      | 1.94 Gb/s    | 1.76 Gb/s    |
| Google Cloud (calico)      | ?      | 1.94 Gb/s    | 1.81 Gb/s    |
Notes:
* Calico and Flannel have comparable performance. Platform and configuration differences dominate.
* Neither CNI provider seems to be able to leverage bonded NICs (bare-metal)
* AWS and Digital Ocean network bandwidth fluctuates more than on other platforms.
* Only [certain AWS EC2 instance types](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) allow jumbo frames. This is why the default MTU on AWS must be 1480.
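
For the MTU note above, jumbo frame support can be checked directly on a node before raising the calico MTU; supported EC2 instance types report an interface MTU of 9001:

```sh
# Check the NIC MTU on an AWS node (interface name varies by instance type)
ip link show eth0 | grep -o 'mtu [0-9]*'
```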