AWS Hosting Options
Here's what the AWS-related hosting options look like, with default values shown or [required] for properties that must be specified:
```yaml
hosting:
  environment: aws
  aws:
    accessKeyId: [required]
    availabilityZone: [required]
    controlPlanePlacementPartitions: -1
    defaultInstanceType: c5.2xlarge
    defaultEbsOptimized: false
    defaultVolumeSize: "128 GiB"
    defaultVolumeType: gp2
    defaultOpenEbsVolumeSize: "128 GiB"
    defaultOpenEbsVolumeType: gp2
    network:
      elasticIpEgressId: null
      elasticIpIngressId: null
      vpcSubnet: "10.100.0.0/16"
      nodeSubnet: "10.100.0.0/24"
      publicSubnet: "10.100.255.0/24"
    resourceGroup: null
    secretAccessKey: [required]
    workerPlacementPartitions: 1
```
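For orientation, here's a minimal sketch of how the required properties might look in an actual cluster definition; the credential and availability zone values below are placeholders, not working values:

```yaml
hosting:
  environment: aws
  aws:
    accessKeyId: AKIAIOSFODNN7EXAMPLE       # placeholder access key ID
    secretAccessKey: wJalrXUtnFEMI/EXAMPLE  # placeholder secret access key
    availabilityZone: us-east-1a            # AZ where the cluster nodes will live
```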
**accessKeyId** *[required]*

Specifies the access key ID used (together with secretAccessKey) to authenticate against the AWS account where the cluster will be provisioned.
**availabilityZone** *[required]*

Identifies the AWS availability zone where the cluster will be provisioned, e.g. us-east-1a. This implicitly identifies the AWS region as well.
**controlPlanePlacementPartitions** *(default: -1)*

AWS provides three different types of placement groups to help users manage where virtual machine instances are provisioned within an AWS availability zone, customizing fault tolerance to AWS hardware failures. See: AWS Placement groups

NeonKUBE provisions instances using two partition placement groups: one for the cluster control-plane nodes and the other for the workers. The idea is that control-plane nodes should be deployed on separate hardware for fault tolerance, because if the majority of control-plane nodes go offline, the entire cluster will be dramatically impacted. In general, the number of controlPlanePlacementPartitions partitions should equal the number of control-plane nodes.

Worker nodes are distributed across workerPlacementPartitions partitions in a separate placement group. The number of worker partitions defaults to 1, potentially limiting resilience to AWS hardware failures while making it more likely that AWS will actually be able to satisfy the conditions required to provision the cluster node instances.

Unfortunately, AWS may not have enough distinct hardware available to satisfy your requirements. In this case, we recommend that you first try another availability zone and, if that doesn't work, reduce the number of partitions, which can be as low as 1.
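For example, a cluster with three control-plane nodes would typically dedicate one partition per control-plane node. This is a sketch; the partition counts are illustrative:

```yaml
hosting:
  environment: aws
  aws:
    # One partition per control-plane node so each lands on distinct hardware.
    controlPlanePlacementPartitions: 3
    # Workers share a single partition (the default), trading some fault
    # tolerance for a better chance that AWS can satisfy the placement request.
    workerPlacementPartitions: 1
```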
**defaultInstanceType** *(default: c5.2xlarge)*

Specifies the default AWS instance type to provision for cluster nodes.

NOTE: NeonKUBE clusters cannot be deployed to ARM-based AWS instance types. You must specify an instance type with an Intel or AMD 64-bit processor.

NOTE: NeonKUBE requires control-plane and worker instances to have at least 4 CPUs and 8 GiB RAM. Choose an AWS instance type that satisfies these requirements.
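For example, to satisfy these minimums on a smaller instance type, you might specify something like the following; m5.xlarge is illustrative (an x86-64 type with 4 vCPUs and 16 GiB RAM):

```yaml
hosting:
  environment: aws
  aws:
    # m5.xlarge: x86-64, 4 vCPUs, 16 GiB RAM; meets the 4 CPU / 8 GiB minimums.
    defaultInstanceType: m5.xlarge
```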
**defaultEbsOptimized** *(default: false)*

Non-EBS-optimized instances perform disk I/O to EBS volumes over the same network used for other network operations. This means you may see disk performance issues when an instance is busy serving web traffic, running database queries, and so on.

EBS optimization can be enabled for some instance types. This provisions extra dedicated network bandwidth exclusively for EBS I/O. Exactly how this works depends on the specific VM type.

More modern AWS VM types enable EBS optimization by default; you won't incur any additional charges for these instances, and disabling EBS optimization will have no effect.

Some AWS instance types can be optimized, but optimization is disabled by default. When you enable it, you'll probably incur an additional AWS hourly fee for these instances.

Some AWS instance types don't support EBS optimization at all. You'll need to be sure that this is disabled for those nodes.
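If you've chosen an instance type where EBS optimization is supported but off by default, enabling it might look like this sketch (whether the extra hourly fee applies depends on the instance type):

```yaml
hosting:
  environment: aws
  aws:
    # Request dedicated EBS bandwidth for instance types that support
    # optional optimization (this may incur an extra hourly fee).
    defaultEbsOptimized: true
```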
**defaultVolumeSize** *(default: "128 GiB")*

Specifies the default size of the EBS data volume provisioned for each cluster node.

NOTE: Node disks smaller than 32 GiB are not supported by NeonKUBE. We'll automatically round up the disk size when necessary.
**defaultVolumeType** *(default: gp2)*

Specifies the default EBS volume type to provision for node data volumes.
**defaultOpenEbsVolumeSize** *(default: "128 GiB")*

Specifies the default size of the EBS volume provisioned on each node for OpenEBS.

NOTE: Node disks smaller than 32 GiB are not supported by NeonKUBE. We'll automatically round up the disk size when necessary.
**defaultOpenEbsVolumeType** *(default: gp2)*

Specifies the default EBS volume type to provision for OpenEBS volumes.
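Here's a sketch overriding the volume size and type defaults above; the sizes are illustrative, and gp3 is shown only as an example of an alternative EBS volume type, assuming it's among the types your NeonKUBE version supports:

```yaml
hosting:
  environment: aws
  aws:
    defaultVolumeSize: "256 GiB"         # node data volumes
    defaultVolumeType: gp3               # alternative EBS volume type
    defaultOpenEbsVolumeSize: "256 GiB"  # volumes backing OpenEBS
    defaultOpenEbsVolumeType: gp3
```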
**network**

Specifies the AWS network options: the VPC subnet, the subnet for the cluster nodes, the public subnet, and the IDs of optional pre-allocated elastic IP addresses for cluster ingress and egress traffic.
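Here's a sketch with custom network settings; the CIDRs are illustrative and, per standard AWS VPC rules, the node and public subnets must fall within the VPC subnet:

```yaml
hosting:
  environment: aws
  aws:
    network:
      vpcSubnet: "10.50.0.0/16"       # CIDR for the cluster VPC
      nodeSubnet: "10.50.0.0/24"      # subnet for the cluster nodes
      publicSubnet: "10.50.255.0/24"  # public-facing subnet
      elasticIpIngressId: null        # optional pre-allocated elastic IP IDs
      elasticIpEgressId: null         #   for cluster ingress/egress traffic
```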
**resourceGroup** *(default: null)*

Optionally identifies the AWS resource group to be used for the cluster resources.
**secretAccessKey** *[required]*

Specifies the secret access key used (together with accessKeyId) to authenticate against the AWS account where the cluster will be provisioned.
**workerPlacementPartitions** *(default: 1)*

AWS provides three different types of placement groups to help users manage where virtual machine instances are provisioned within an AWS availability zone, customizing fault tolerance to AWS hardware failures. See: AWS Placement groups

NeonKUBE provisions instances using two partition placement groups: one for the cluster control-plane nodes and the other for the workers. The idea is that control-plane nodes should be deployed on separate hardware for fault tolerance, because if the majority of control-plane nodes go offline, the entire cluster will be dramatically impacted. In general, the number of controlPlanePlacementPartitions partitions should equal the number of control-plane nodes.

Worker nodes are distributed across workerPlacementPartitions partitions in a separate placement group (see the sketch following controlPlanePlacementPartitions above). The number of worker partitions defaults to 1, potentially limiting resilience to AWS hardware failures while making it more likely that AWS will actually be able to satisfy the conditions required to provision the cluster node instances.

Unfortunately, AWS may not have enough distinct hardware available to satisfy your requirements. In this case, we recommend that you first try another availability zone and, if that doesn't work, reduce the number of partitions, which can be as low as 1.