
AWS Hosting Options

Here's what the AWS-related hosting options look like, with default values shown, or [required] for property values that must be specified:

hosting:
  environment: aws
  aws:
    accessKeyId: [required]
    availabilityZone: [required]
    controlPlanePlacementPartitions: -1
    defaultInstanceType: c5.2xlarge
    defaultEbsOptimized: false
    defaultVolumeSize: "128 GiB"
    defaultVolumeType: gp2
    defaultOpenEbsVolumeSize: "128 GiB"
    defaultOpenEbsVolumeType: gp2
    network:
      elasticIpEgressId: null
      elasticIpIngressId: null
      vpcSubnet: "10.100.0.0/16"
      nodeSubnet: "10.100.0.0/24"
      publicSubnet: "10.100.255.0/24"
    resourceGroup: null
    secretAccessKey: [required]
    workerPlacementPartitions: 1
Property Descriptions
accessKeyId

string: Specifies the AWS access key ID that identifies the IAM key created for the IAM user assigned to NeonKUBE for management activities, including creating the cluster. This, combined with secretAccessKey, is used to confirm the identity. This is required.

availabilityZone

string: Specifies the AWS availability zone where the cluster will be provisioned. This is required.
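For example, the required credentials and zone appear together under the aws options. This is a minimal sketch: the key values below are AWS's documented placeholder examples, not real credentials, and us-west-2a is just an illustrative zone:

hosting:
  environment: aws
  aws:
    accessKeyId: AKIAIOSFODNN7EXAMPLE                           # placeholder access key ID
    secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY   # placeholder secret (see secretAccessKey below)
    availabilityZone: us-west-2a                                # target availability zone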

controlPlanePlacementPartitions

integer: Specifies the number of control-plane placement group partitions the cluster control-plane node instances will be deployed to. This defaults to -1 which means that the number of partitions will equal the number of control-plane nodes. AWS supports a maximum of 7 placement partitions.

AWS provides three different types of placement groups to help users manage where virtual machine instances are provisioned within an AWS availability zone, customizing fault tolerance against AWS hardware failures. See: AWS Placement groups

NeonKUBE provisions instances using two partition placement groups, one for the cluster control-plane nodes and the other for the workers. The idea is that control-plane nodes should be deployed on separate hardware for fault tolerance because if the majority of control-plane nodes go offline, the entire cluster will be dramatically impacted. In general, the number of controlPlanePlacementPartitions partitions should equal the number of control-plane nodes.

Worker nodes are distributed across workerPlacementPartitions partitions in a separate placement group. The number of worker partitions defaults to 1, potentially limiting the resilience to AWS hardware failures while making it more likely that AWS will be able to actually satisfy the conditions to provision the cluster node instances.

Unfortunately, AWS may not have enough distinct hardware available to satisfy your requirements. In this case, we recommend that you try another availability zone first, and if that doesn't work, try reducing the number of partitions, which can be as low as 1.
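For example, a cluster defined with three control-plane nodes might pin the partition count explicitly instead of relying on the -1 default. This is a minimal sketch with illustrative values, not a complete cluster definition:

hosting:
  environment: aws
  aws:
    controlPlanePlacementPartitions: 3   # matches a (hypothetical) three-node control-plane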

defaultInstanceType

string: Identifies the default AWS instance type to be provisioned for cluster nodes that don't specify an instance type. This defaults to c5.2xlarge which includes 8 virtual cores and 16 GiB RAM but can be overridden for specific cluster nodes.

NOTE: NeonKUBE clusters cannot be deployed to ARM-based AWS instance types. You must specify an instance type using an Intel or AMD 64-bit processor.

NOTE: NeonKUBE requires control-plane and worker instances to have at least 4 CPUs and 8 GiB RAM. Choose an AWS instance type that satisfies these requirements.
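For example, to provision smaller nodes that still satisfy these minimums, the default instance type could be overridden like this (an illustrative sketch; any x86-64 type with at least 4 CPUs and 8 GiB RAM would work):

hosting:
  environment: aws
  aws:
    defaultInstanceType: m5.xlarge   # 4 vCPUs, 16 GiB RAM (x86-64)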

defaultEbsOptimized

bool: Specifies whether the cluster instances should be EBS-optimized by default. This defaults to false and can be overridden for specific cluster nodes. See this for more information: Amazon EBS-optimized instances

Non-EBS-optimized instances perform disk I/O to EBS volumes over the same network used for other network operations. This means you may see disk performance issues when your instance is busy serving web traffic, running database queries, etc.

EBS optimization can be enabled for some instance types. This provisions extra dedicated network bandwidth exclusively for EBS I/O. Exactly how this works depends on the specific VM type.

More modern AWS VM types enable EBS optimization by default. You won't incur any additional charges for these instances, and disabling EBS optimization won't have any effect.

Some AWS instance types can be EBS-optimized but have this disabled by default. When you enable this, you'll probably incur an additional AWS hourly fee for these instances.

Some AWS instance types don't support EBS optimization. You'll need to be sure that this is disabled for those nodes.
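For example, enabling EBS optimization by default for all nodes is a one-line change (an illustrative sketch; this only has an effect for instance types where optimization is supported and not already enabled):

hosting:
  environment: aws
  aws:
    defaultEbsOptimized: true   # request dedicated EBS bandwidth where the instance type supports it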

defaultVolumeSize

string: Specifies the default AWS volume size for the cluster node primary disks. This defaults to 128 GiB but can be overridden for specific cluster nodes.

NOTE: Node disks smaller than 32 GiB are not supported by NeonKUBE. We'll automatically round up the disk size when necessary.

defaultVolumeType

string: Specifies the default AWS volume type for cluster node primary disks. This defaults to gp2 which is SSD based and offers a reasonable compromise between performance and cost. This can be overridden for specific cluster nodes.
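For example, larger or faster primary disks could be configured cluster-wide like this (illustrative values; this assumes the chosen EBS volume type, gp3 here, is among the supported types in your region):

hosting:
  environment: aws
  aws:
    defaultVolumeSize: "256 GiB"   # larger primary disk for every node
    defaultVolumeType: gp3         # assumed volume type for illustration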

defaultOpenEbsVolumeSize

string: Specifies the default AWS volume size to be used when creating OpenEBS cStor disks. This defaults to 128 GiB but can be overridden for specific cluster nodes.

NOTE: Node disks smaller than 32 GiB are not supported by NeonKUBE. We'll automatically round up the disk size when necessary.

defaultOpenEbsVolumeType

string: Specifies the default AWS volume type to use for OpenEBS cStor disks. This defaults to gp2 which is SSD based and offers a reasonable compromise between performance and cost. This can be overridden for specific cluster nodes.
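For example, the OpenEBS cStor disks can be sized independently of the primary disks (illustrative values, not a complete cluster definition):

hosting:
  environment: aws
  aws:
    defaultOpenEbsVolumeSize: "256 GiB"   # larger cStor disk on every node
    defaultOpenEbsVolumeType: gp2         # keep the default SSD-based type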

network

object: Specifies the AWS-related cluster network options.

Property Descriptions
elasticIpEgressId

string: Optionally specifies an existing Elastic IP address to be used by the cluster load balancer for sending (outbound) network traffic. Set this to your Elastic IP allocation ID (something like eipalloc-080a1efa9c04ad72). This is useful for ensuring that your cluster is always provisioned with a known static IP.

NOTE: When this isn't specified, the cluster will be configured with new Elastic IPs that will be released when the cluster is deleted.

NOTE: elasticIpIngressId and elasticIpEgressId must be specified together or not at all.

elasticIpIngressId

string: Optionally specifies an existing Elastic IP address to be used by the cluster load balancer for receiving (inbound) network traffic. Set this to your Elastic IP allocation ID (something like eipalloc-080a1efa9c04ad88). This is useful for ensuring that your cluster is always provisioned with a known static IP.

NOTE: When this isn't specified, the cluster will be configured with new Elastic IPs that will be released when the cluster is deleted.

NOTE: elasticIpIngressId and elasticIpEgressId must be specified together or not at all.
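For example, to reuse pre-allocated Elastic IPs so the cluster keeps known static addresses across deployments, both IDs are set together (the allocation IDs below are hypothetical):

hosting:
  environment: aws
  aws:
    network:
      elasticIpIngressId: eipalloc-0aaaaaaaaaaaaaaa1   # hypothetical allocation ID for inbound traffic
      elasticIpEgressId: eipalloc-0bbbbbbbbbbbbbbb2    # hypothetical allocation ID for outbound traffic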

vpcSubnet

string: Specifies the subnet CIDR to be used for the AWS VPC (virtual private cloud) provisioned for the cluster. This must contain the nodeSubnet and publicSubnet subnets. This defaults to 10.100.0.0/16.

nodeSubnet

string: Specifies the private subnet CIDR within vpcSubnet where the cluster node instances will be provisioned. This defaults to 10.100.0.0/24.

publicSubnet

string: Specifies the public subnet CIDR within vpcSubnet for the public subnet where the AWS network load balancer will be provisioned to manage inbound cluster traffic. This defaults to 10.100.255.0/24.
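For example, to avoid overlapping an existing 10.100.0.0/16 network, all three subnets could be moved together (illustrative values; the node and public subnets must remain inside the VPC subnet):

hosting:
  environment: aws
  aws:
    network:
      vpcSubnet: "10.200.0.0/16"       # VPC CIDR containing both subnets below
      nodeSubnet: "10.200.0.0/24"      # private subnet for the node instances
      publicSubnet: "10.200.255.0/24"  # public subnet for the network load balancer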

resourceGroup

string: Specifies the AWS resource group where all cluster components are to be provisioned. This defaults to "neon-" plus the cluster name but can be customized as required.
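For example, a custom resource group name could be specified instead of the generated "neon-" prefix (the name below is hypothetical):

hosting:
  environment: aws
  aws:
    resourceGroup: my-team-cluster   # hypothetical resource group name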

secretAccessKey

string: Specifies the AWS secret used to authenticate the accessKeyId. This is required.

workerPlacementPartitions

integer: Specifies the number of worker placement group partitions the cluster worker node instances will be deployed to. This defaults to 1, trading off resilience to hardware failures against increasing the chance that AWS will actually be able to provision the cluster nodes. AWS supports a maximum of 7 placement partitions.

AWS provides three different types of placement groups to help users manage where virtual machine instances are provisioned within an AWS availability zone, customizing fault tolerance against AWS hardware failures. See: AWS Placement groups

NeonKUBE provisions instances using two partition placement groups, one for the cluster control-plane nodes and the other for the workers. The idea is that control-plane nodes should be deployed on separate hardware for fault tolerance because if the majority of control-plane nodes go offline, the entire cluster will be dramatically impacted. In general, the number of controlPlanePlacementPartitions partitions should equal the number of control-plane nodes.

Worker nodes are distributed across workerPlacementPartitions partitions in a separate placement group. The number of worker partitions defaults to 1, potentially limiting the resilience to AWS hardware failures while making it more likely that AWS will be able to actually satisfy the conditions to provision the cluster node instances.

Unfortunately, AWS may not have enough distinct hardware available to satisfy your requirements. In this case, we recommend that you try another availability zone first, and if that doesn't work, try reducing the number of partitions, which can be as low as 1.
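For example, a cluster with eight workers might spread them across four partitions for better fault tolerance, accepting a somewhat higher chance that AWS can't satisfy the placement (an illustrative value, not a recommendation):

hosting:
  environment: aws
  aws:
    workerPlacementPartitions: 4   # spread workers across four placement partitions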