You use the cluster create command to create a Hadoop or HBase cluster.

If the cluster specification does not include the required nodes, for example a master node, the Serengeti Management Server creates the cluster according to its default cluster configuration.
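For example, a minimal create command that relies on the default cluster configuration might look like this (the cluster and network names here are hypothetical):

```
cluster create --name myHadoop --networkName mgmtNetwork
```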




--name cluster_name_in_Serengeti


Cluster name.

--networkName management_network_name


Network to use for management traffic in Hadoop clusters.

If you omit any of the optional network parameters, the traffic associated with that parameter is routed on the management network that you specify with the --networkName parameter.

--adminGroupName admin_group_name


Administrative group to use for this cluster as defined in Active Directory or LDAP.



--userGroupName user_group_name


User group to use for this cluster as defined in Active Directory or LDAP.



--appManager application_manager_name


Name of an application manager, other than the default, to manage your clusters.

--type cluster_type


Cluster type:

Hadoop (Default)

HBase




--password


Do not use if you use the --resume parameter.

Custom password for all the nodes in the cluster.

Passwords must be from 8 to 20 characters, use only visible lower-ASCII characters (no spaces), and must contain at least one uppercase alphabetic character (A-Z), at least one lowercase alphabetic character (a-z), at least one digit (0-9), and at least one of the following special characters: _, @, #, $, %, ^, &, *
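As an illustration of this password policy (this check is not part of the CLI; it is a sketch of the documented rules), a small Python validator might look like:

```python
import re

def is_valid_node_password(password: str) -> bool:
    """Return True if a candidate node password satisfies the documented
    policy: 8 to 20 characters, visible lower-ASCII only (no spaces), and
    at least one uppercase letter, one lowercase letter, one digit, and
    one of the special characters _ @ # $ % ^ & *."""
    if not 8 <= len(password) <= 20:
        return False
    # Visible lower-ASCII characters are 0x21 ('!') through 0x7E ('~');
    # this range excludes spaces and control characters.
    if not all(0x21 <= ord(c) <= 0x7E for c in password):
        return False
    return (re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[_@#$%^&*]", password) is not None)

print(is_valid_node_password("Serengeti#1"))   # True: meets every rule
print(is_valid_node_password("serengeti#1"))   # False: no uppercase letter
```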

--specFile spec_file_path


Cluster specification filename. For compute-only clusters, you must revise the spec file to point to an external HDFS.
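A cluster specification file is JSON that defines the cluster's node groups. As a rough, hypothetical sketch only (the role names, counts, and sizes below are illustrative and must match your distribution and environment):

```
{
  "nodeGroups": [
    {
      "name": "master",
      "roles": ["hadoop_namenode", "hadoop_jobtracker"],
      "instanceNum": 1,
      "cpuNum": 2,
      "memCapacityMB": 7500
    }
  ]
}
```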

--distro Hadoop_distro_name


Hadoop distribution for the cluster.

--dsNames datastore_names


Datastores to use to deploy the Hadoop cluster in Serengeti. You can specify multiple datastores, separated by commas.

By default, all available datastores are used.

When you specify the --dsNames parameter, the cluster can use only those datastores that you provide in this command.
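For example, to restrict the cluster to two specific datastores (the datastore names here are hypothetical):

```
cluster create --name myHadoop --networkName mgmtNetwork --dsNames ds3,ds4
```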

--hdfsNetworkName hdfs_network_name


Network to use for HDFS traffic in Hadoop clusters.

--mapredNetworkName mapred_network_name


Network to use for MapReduce traffic in Hadoop clusters.
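If your environment provides separate networks, you can split management, HDFS, and MapReduce traffic across them. A hypothetical example (all network names are illustrative):

```
cluster create --name myHadoop --networkName mgmtNetwork --hdfsNetworkName hdfsNetwork --mapredNetworkName mrNetwork
```

Any traffic type whose parameter you omit falls back to the management network.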

--rpNames resource_pool_name


Resource pools to use for Hadoop clusters. You can specify multiple resource pools, separated by commas.



--resume


Do not use if you use the --password parameter.

Recover from a failed deployment process.

--topology topology_type


Topology type for rack awareness: HVE, RACK_AS_RACK, or HOST_AS_RACK.
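For example, to create a cluster that treats each physical rack as a Hadoop rack (the cluster and network names are hypothetical):

```
cluster create --name myHadoop --networkName mgmtNetwork --topology RACK_AS_RACK
```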



--yes


Confirmation of whether to proceed after a warning message. If you specify y, cluster creation continues. If you do not specify y, the CLI displays the following prompt after the warning message, and you must type y or n:

Are you sure you want to continue (Y/N)?



--skipConfigValidation


Skip the validation of the cluster configuration.

--skipVcRefresh true


When you perform cluster operations in a large vCenter Server environment, refreshing the inventory list can take considerable time. Use this parameter to improve cluster creation or resumption performance by skipping the refresh.


If the Serengeti Management Server shares the vCenter Server environment with other workloads, do not use this parameter. The Serengeti Management Server cannot track the resource usage of other products' workloads, and must refresh the inventory list in such circumstances.
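For example, in a large vCenter Server environment dedicated to Serengeti, you might skip the inventory refresh (the cluster and network names are hypothetical):

```
cluster create --name myHadoop --networkName mgmtNetwork --skipVcRefresh true
```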



--localRepoURL local_repository_url


Option to create a local yum repository.

--externalMapReduce FQDN_of_Jobtracker/ResourceManager:port


Fully qualified domain name and port of an external JobTracker or ResourceManager. The port number is optional.
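For example, a compute-only cluster might combine a revised specification file with an external ResourceManager (the names, path, and port below are hypothetical):

```
cluster create --name myCompute --networkName mgmtNetwork --specFile /home/serengeti/compute_only_spec.json --externalMapReduce rm.example.com:8032
```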