cluster
You can define one or more clusters for different types of jobs or workloads.
Each cluster has its own configuration based on your needs.
The format is [cluster <clustername>].
key_name
Name of an existing EC2 KeyPair to enable SSH access to the instances.
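The example below uses a hypothetical KeyPair name:
key_name = mykey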
template_url
Overrides the path to the CloudFormation template used to create the cluster.
Defaults to
https://s3.amazonaws.com/<aws_region_name>-aws-parallelcluster/templates/aws-parallelcluster-<version>.cfn.json.
template_url = https://s3.amazonaws.com/us-east-1-aws-parallelcluster/templates/aws-parallelcluster.cfn.json
compute_instance_type
The EC2 instance type used for the cluster compute nodes.
If you’re using awsbatch, please refer to the Compute Environment creation in the AWS Batch UI for the list of
supported instance types.
Defaults to t2.micro (or optimal when the scheduler is awsbatch).
compute_instance_type = t2.micro
master_instance_type
The EC2 instance type used for the master node.
This defaults to t2.micro.
master_instance_type = t2.micro
initial_queue_size
The initial number of EC2 instances to launch as compute nodes in the cluster for traditional schedulers.
If you’re using awsbatch, use min_vcpus.
The default is 2.
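initial_queue_size = 2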
max_queue_size
The maximum number of EC2 instances that can be launched in the cluster for traditional schedulers.
If you’re using awsbatch, use max_vcpus.
This defaults to 10.
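max_queue_size = 10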
maintain_initial_size
Boolean flag to set the Auto Scaling group to maintain its initial size, for traditional schedulers.
If you’re using awsbatch, use desired_vcpus.
If set to true, the Auto Scaling group will never have fewer members than the value of initial_queue_size. It will
still allow the cluster to scale up to the value of max_queue_size.
Setting to false allows the Auto Scaling group to scale down to 0 members, so resources will not sit idle when they
aren’t needed.
Defaults to false.
maintain_initial_size = false
min_vcpus
If the scheduler is awsbatch, the compute environment will never have fewer than min_vcpus vCPUs.
Defaults to 0.
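min_vcpus = 0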
desired_vcpus
If the scheduler is awsbatch, the compute environment will initially have desired_vcpus vCPUs.
Defaults to 4.
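desired_vcpus = 4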
max_vcpus
If the scheduler is awsbatch, the compute environment will have at most max_vcpus vCPUs.
Defaults to 20.
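max_vcpus = 20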
scheduler
Scheduler to be used with the cluster. Valid options are sge, torque, slurm, or awsbatch.
If you’re using awsbatch, please take a look at the networking setup.
Defaults to sge.
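scheduler = sge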
cluster_type
Type of cluster to launch, i.e. ondemand or spot.
Defaults to ondemand.
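cluster_type = ondemand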
spot_price
If cluster_type is set to spot, you can optionally set the maximum spot price for the ComputeFleet on traditional
schedulers. If you do not specify a value, you are charged the Spot price, capped at the On-Demand price.
If you’re using awsbatch, use spot_bid_percentage.
See the Spot Bid Advisor for assistance in finding a bid price that meets your needs.
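The example below sets a hypothetical maximum price of $1.50:
spot_price = 1.50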
spot_bid_percentage
If you’re using awsbatch as your scheduler, this optional parameter is the on-demand bid percentage. If not specified,
you’ll get the current spot market price, capped at the on-demand price.
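The example below sets a hypothetical bid of 85% of the on-demand price:
spot_bid_percentage = 85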
s3_read_resource
Specify an S3 resource for which AWS ParallelCluster nodes will be granted read-only access.
For example, ‘arn:aws:s3:::my_corporate_bucket/*’ would provide read-only access to all objects in the
my_corporate_bucket bucket.
See working with S3 for details on format.
Defaults to NONE.
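s3_read_resource = NONE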
s3_read_write_resource
Specify an S3 resource for which AWS ParallelCluster nodes will be granted read-write access.
For example, ‘arn:aws:s3:::my_corporate_bucket/Development/*’ would provide read-write access to all objects in the
Development folder of the my_corporate_bucket bucket.
See working with S3 for details on format.
Defaults to NONE.
s3_read_write_resource = NONE
pre_install
URL to a preinstall script. This is executed before any of the boot_as_* scripts are run.
When using awsbatch as your scheduler, this is only executed on the master node.
Can be specified in “http://hostname/path/to/script.sh” or “s3://bucketname/path/to/script.sh” format.
Defaults to NONE.
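pre_install = NONE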
pre_install_args
Quoted list of arguments to be passed to the preinstall script.
Defaults to NONE.
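pre_install_args = NONE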
post_install
URL to a postinstall script. This is executed after the boot_as_* scripts are run.
When using awsbatch as your scheduler, this is only executed on the master node.
Can be specified in “http://hostname/path/to/script.sh” or “s3://bucketname/path/to/script.sh” format.
Defaults to NONE.
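post_install = NONE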
post_install_args
Arguments to be passed to the postinstall script.
Defaults to NONE.
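post_install_args = NONE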
placement_group
Cluster placement group. This can be one of three values: NONE, DYNAMIC, or an existing placement group name. When
DYNAMIC is set, a unique placement group will be created as part of the cluster and deleted when the cluster is deleted.
This does not apply to awsbatch.
Defaults to NONE. More information on placement groups can be found in the EC2 documentation.
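placement_group = NONE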
placement
Cluster placement logic. This enables the whole cluster or only the compute nodes to use the placement group.
Can be cluster or compute.
This does not apply to awsbatch.
Defaults to cluster.
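placement = cluster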
ephemeral_dir
If instance store volumes exist, this is the path/mountpoint they will be mounted on.
Defaults to /scratch.
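ephemeral_dir = /scratch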
shared_dir
Path/mountpoint for the shared EBS volume. Do not use this option when using multiple EBS volumes; provide shared_dir
under each EBS section instead.
Defaults to /shared. The example below mounts to /myshared. See EBS Section for details on working
with multiple EBS volumes:
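shared_dir = /myshared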
encrypted_ephemeral
Encrypted ephemeral drives. If true, AWS ParallelCluster will generate an ephemeral encryption key in memory (keys are
non-recoverable) and encrypt your instance store volumes using LUKS encryption.
Defaults to false.
encrypted_ephemeral = false
master_root_volume_size
MasterServer root volume size in GB. (AMI must support growroot)
Defaults to 15.
master_root_volume_size = 15
compute_root_volume_size
ComputeFleet root volume size in GB. (AMI must support growroot)
Defaults to 15.
compute_root_volume_size = 15
base_os
OS type used in the cluster.
Defaults to alinux. Available options are: alinux, centos6, centos7, ubuntu1404, and ubuntu1604.
Note: The base_os determines the username used to log into the cluster:
CentOS 6 & 7: centos
Ubuntu: ubuntu
Amazon Linux: ec2-user
Supported OSes by region are listed below. Note that commercial refers to all standard commercial regions, such as us-east-1, us-west-2, etc.
============== ====== ============ ============ ============= ============
region         alinux centos6      centos7      ubuntu1404    ubuntu1604
============== ====== ============ ============ ============= ============
commercial     True   True         True         True          True
us-gov-west-1  True   False        False        True          True
us-gov-east-1  True   False        False        True          True
cn-north-1     True   False        False        True          True
cn-northwest-1 True   False        False        False         False
============== ====== ============ ============ ============= ============
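base_os = alinux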
ec2_iam_role
The given name of an existing EC2 IAM Role that will be attached to all
instances in the cluster. Note that the given name of a role and its Amazon
Resource Name (ARN) are different; the latter cannot be used as an argument
to ec2_iam_role.
Defaults to NONE.
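ec2_iam_role = NONE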
additional_cfn_template
An additional CloudFormation template to launch along with the cluster. This allows you to create resources that exist
outside of the cluster but are part of the cluster’s life cycle.
Must be an HTTP URL to a public template, with all parameters provided.
Defaults to NONE.
additional_cfn_template = NONE
vpc_settings
Settings section relating to the VPC to be used.
See VPC Section.
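The example below references a hypothetical [vpc public] section:
vpc_settings = public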
ebs_settings
Settings section relating to EBS volume mounted on the master. When using multiple EBS volumes, enter multiple settings
as a comma separated list. Up to 5 EBS volumes are supported.
See EBS Section.
ebs_settings = custom1, custom2, ...
scaling_settings
Settings section relating to scaling.
See Scaling Section.
scaling_settings = custom
efs_settings
Settings section relating to the EFS filesystem.
See EFS Section.
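The example below references the [efs customfs] section shown in the EFS section:
efs_settings = customfs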
raid_settings
Settings section relating to RAID drive configuration.
See RAID Section.
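The example below references a hypothetical [raid rs] section:
raid_settings = rs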
EFS
EFS file system configuration settings for the EFS mounted on the master node and compute nodes via nfs4.
[efs customfs]
shared_dir = efs
encrypted = false
performance_mode = generalPurpose
shared_dir
Shared directory that the file system will be mounted to on the master and compute nodes.
This parameter is REQUIRED; the EFS section will only be used if this parameter is specified.
The example below mounts to /efs. Do not use NONE or /NONE as the shared directory:
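shared_dir = efs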
encrypted
Whether or not the file system will be encrypted.
Defaults to false.
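encrypted = false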
throughput_mode
The throughput mode for the file system to be created.
There are two throughput modes to choose from for your file system: bursting and provisioned.
Valid values are provisioned | bursting.
throughput_mode = provisioned
provisioned_throughput
The throughput, measured in MiB/s, that you want to provision for a file system that you’re creating.
The limit on throughput is 1024 MiB/s; you can request a limit increase by contacting AWS Support.
Valid range: minimum of 0.0. To use this option, throughput_mode must be set to provisioned.
provisioned_throughput = 1024
efs_fs_id
File system ID for an existing file system. Specifying this option voids all other EFS options except shared_dir.
Config sanity will only allow file systems that have no mount target in the stack’s availability zone,
OR have an existing mount target in the stack’s availability zone with inbound and outbound NFS traffic allowed from 0.0.0.0/0.
Note: the sanity check for validating efs_fs_id requires the IAM role to have permission for the following actions:
efs:DescribeMountTargets, efs:DescribeMountTargetSecurityGroups, ec2:DescribeSubnets, ec2:DescribeSecurityGroups.
Please add these permissions to your IAM role, or set sanity_check = false to avoid errors.
CAUTION: having a mount target with inbound and outbound NFS traffic allowed from 0.0.0.0/0 exposes the file system
to NFS mounting requests from anywhere in the mount target’s availability zone. We recommend not having a mount target
in the stack’s availability zone and letting AWS ParallelCluster create the mount target. If you must have a mount
target in the stack’s availability zone, consider using a custom security group by providing a vpc_security_group_id
option under the vpc section, adding that security group to the mount target, and turning off config sanity to create
the cluster.
Defaults to NONE. Must be an available EFS file system.
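The example below uses a hypothetical file system ID:
efs_fs_id = fs-12345678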