Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-07-25 00:11:38 +02:00)
Add new tutorial docs and links
@@ -8,7 +8,7 @@ Formats rise and evolve. Typhoon may choose to adapt the format over time (with
 
 ## Operating Systems
 
-Typhoon supports Container Linux and Fedora Atomic 27. These two operating systems were chosen because they offer:
+Typhoon supports Container Linux and Fedora Atomic 28. These two operating systems were chosen because they offer:
 
 * Minimalism and focus on clustered operation
 * Automated and atomic operating system upgrades
@@ -7,14 +7,15 @@ Provisioning times vary based on the operating system and platform. Sampling the
 
 | Platform      | Apply | Destroy |
 |---------------|-------|---------|
 | AWS           | 6 min | 5 min |
+| Azure         | 7 min | 7 min |
 | Bare-Metal    | 10-15 min | NA |
 | Digital Ocean | 3 min 30 sec | 20 sec |
-| Google Cloud  | 6 min | 4 min 30 sec |
+| Google Cloud  | 7 min | 6 min |
 
 Notes:
 
 * SOA TTL and NXDOMAIN caching can have a large impact on provision time
-* Platforms with auto-scaling take more time to provision (AWS, Google)
+* Platforms with auto-scaling take more time to provision (AWS, Azure, Google)
 * Bare-metal POST times and network bandwidth will affect provision times
 
 ## Network Performance
@@ -26,17 +27,19 @@ Network performance varies based on the platform and CNI plugin. `iperf` was use
 | AWS (flannel) | ? | 976 MB/s | 900-999 MB/s |
 | AWS (calico, MTU 1480) | ? | 976 MB/s | 100-350 MB/s |
 | AWS (calico, MTU 8981) | ? | 976 MB/s | 900-999 MB/s |
-| Bare-Metal (flannel) | 1 GB/s | 934 MB/s | 903 MB/s |
-| Bare-Metal (calico) | 1 GB/s | 941 MB/s | 931 MB/s |
+| Azure (flannel) | ? | 749 MB/s | 680 MB/s |
+| Bare-Metal (flannel) | 1 GB/s | ~940 MB/s | 903 MB/s |
+| Bare-Metal (calico) | 1 GB/s | ~940 MB/s | 931 MB/s |
 | Bare-Metal (flannel, bond) | 3 GB/s | 2.3 GB/s | 1.17 GB/s |
 | Bare-Metal (calico, bond) | 3 GB/s | 2.3 GB/s | 1.17 GB/s |
-| Digital Ocean | ? | 938 MB/s | 820-880 MB/s |
+| Digital Ocean | ? | ~940 MB/s | 820-880 MB/s |
 | Google Cloud (flannel) | ? | 1.94 GB/s | 1.76 GB/s |
 | Google Cloud (calico) | ? | 1.94 GB/s | 1.81 GB/s |
 
 Notes:
 
 * Calico and Flannel have comparable performance. Platform and configuration differences dominate.
-* Neither CNI provider seems to be able to leverage bonded NICs (bare-metal)
 * AWS and Digital Ocean network bandwidth fluctuates more than on other platforms.
+* AWS and Azure node bandwidth (i.e. upper bound) depends greatly on machine type
 * Only [certain AWS EC2 instance types](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) allow jumbo frames. This is why the default MTU on AWS must be 1480.
+* Neither CNI provider seems to be able to leverage bonded NICs well (bare-metal)
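The MTU note above has simple arithmetic behind it. The sketch below is illustrative Python (not from the Typhoon docs) and assumes IPv4 and TCP headers without options, plus the 20-byte outer IPv4 header that Calico's IP-in-IP encapsulation adds; 1480 and 8981 are the standard and jumbo MTUs from the AWS calico rows:

```python
# Back-of-the-envelope numbers for standard vs. jumbo frames with an
# encapsulating CNI. Assumed header sizes: IPv4 (20 B), TCP without
# options (20 B), Calico IP-in-IP outer header (20 B). Real throughput
# also depends on per-packet CPU cost, TCP options, and NIC offloads.

IPV4_HEADER = 20
TCP_HEADER = 20
IPIP_HEADER = 20

def payload_fraction(mtu: int, encapsulated: bool = True) -> float:
    """Fraction of each packet's MTU left over for TCP payload."""
    overhead = IPV4_HEADER + TCP_HEADER + (IPIP_HEADER if encapsulated else 0)
    return (mtu - overhead) / mtu

# The two MTUs from the AWS calico rows:
print(f"MTU 1480: {payload_fraction(1480):.1%} payload")
print(f"MTU 8981: {payload_fraction(8981):.1%} payload")

# Jumbo frames mainly help by cutting how many packets must be processed:
print(f"{(8981 - 60) / (1480 - 60):.1f}x fewer packets per byte transferred")
```

Header overhead alone is modest at either MTU; the larger effect is the roughly 6x reduction in packet count, and with it per-packet processing, which is why the jumbo-frame AWS calico row recovers near line rate.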
@@ -12,7 +12,7 @@ Typhoon aims to be minimal and secure. We're running it ourselves after all.
 * Workloads run on worker nodes only, unless they tolerate the master taint
 * Kubernetes [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and Calico [Policy](https://docs.projectcalico.org/latest/reference/calicoctl/resources/policy) support [^1]
 
-[^1]: Requires `networking = "calico"`. Calico is the default on AWS, bare-metal, and Google Cloud. Digital Ocean is limited to `networking = "flannel"`.
+[^1]: Requires `networking = "calico"`. Calico is the default on AWS, bare-metal, and Google Cloud. Azure and Digital Ocean are limited to `networking = "flannel"`.
 
 **Hosts**
 
@@ -24,11 +24,13 @@ Typhoon aims to be minimal and secure. We're running it ourselves after all.
 * Cloud firewalls limit access to ssh, kube-apiserver, and ingress
 * No cluster credentials are stored in Matchbox (used for bare-metal)
 * No cluster credentials are stored in Digital Ocean metadata
-* Cluster credentials are stored in Google Cloud metadata (for managed instance groups)
 * Cluster credentials are stored in AWS metadata (for ASGs)
-* No account credentials are available to Google Cloud instances (no IAM permissions)
-* No account credentials are available to AWS EC2 instances (no IAM permissions)
+* Cluster credentials are stored in Azure metadata (for scale sets)
+* Cluster credentials are stored in Google Cloud metadata (for managed instance groups)
 * No account credentials are available to Digital Ocean droplets
+* No account credentials are available to AWS EC2 instances (no IAM permissions)
+* No account credentials are available to Azure instances (no IAM permissions)
+* No account credentials are available to Google Cloud instances (no IAM permissions)
 
 ## Precautions
 
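For context, the `networking` variable in the footnote above is set where a Typhoon cluster module is instantiated. A minimal Terraform sketch; the module source, name, and the elided required variables are illustrative placeholders, not copied from the tutorials this commit adds:

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes"

  # ...other required variables (cluster name, DNS zone, SSH keys, etc.) elided...

  # "calico" is the default on AWS, bare-metal, and Google Cloud;
  # Azure and Digital Ocean support only "flannel".
  networking = "calico"
}
```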