sqlf server

A SQLFire server is the main server-side component in a SQLFire system. It provides connectivity to other servers, peers, and clients in the cluster, and it can host data. You start a server with the server utility of the sqlf launcher.

Syntax


To start a SQLFire server:

sqlf server start [-J<vmarg>]* [-dir=<workingdir>] 
                  [-classpath=<classpath>]
                  [-max-heap=<size>] [-initial-heap=<size>]
                  [-server-groups=<groups>]
                  [-run-netserver=<true|false> (default true)]
                  [-sync=<true|false> (default false)]
                  [-rebalance] [-config-scripts=<sql-files>]
                  [-init-scripts=<sql-files>]
                  [-bind-address=<address> (default is hostname or localhost 
                    if hostname points to a local loopback address)]
                  [-client-bind-address=<clientaddr> (default is localhost)]
                  [-client-port=<clientport> (default 1527)]
                  [-critical-heap-percentage=<percentage>
                    (default 90 if -max-heap and -initial-heap both are
                     provided, otherwise not configured)]
                  [-eviction-heap-percentage=<eviction-heap-percentage>
                    (default 80% of critical-heap-percentage)]
                  [-mcast-port=<port> (default 10334)]
                  [-mcast-address=<address> (default 239.192.81.1)]
                  [-locators=<addresses>] [-start-locator=<address>]
                  [-host-data=<true|false> (default true)]
                  [-auth-provider=<provider>]
                  [-server-auth-provider=<provider>]
                  [-user=<username>] [-password[=<password>]]
                  [-log-file=<path> (default sqlfserver.log)]
                  [-<prop-name>=<prop-value>]*

To display the status of a running server:

sqlf server status [ -dir=<workingdir> ]

To stop a running server:

sqlf server stop [ -dir=<workingdir> ]

If you started a server from a script or batch file using the -sync=false option, you can use the following command to wait until the server has finished synchronization and finished starting:

sqlf server wait [-J<vmarg>]* [-dir=<workingdir>]

The wait command does not return control until the server has completed startup.

To stop all running accessor and datastore members in the system, use the following command from within a server working directory:

sqlf shut-down-all

The table describes the options of the sqlf server command. Default values are used if you do not specify an option.

Option Description
-J JVM option passed to the spawned SQLFire server JVM. For example, use -J-Xmx1024m to set the JVM heap to 1GB.
-dir Working directory of the server that will contain the SQLFire Server status file and will be the default location for the log file, persistent files, data dictionary, and so forth (defaults to the current directory).
-classpath Location of user classes required by the SQLFire Server.

This path is appended to the current classpath.

-max-heap, -initial-heap Set the maximum heap size and the initial heap size for the Java VM, using SQLFire default resource manager settings. If you use the -max-heap and -initial-heap options, by default SQLFire sets the critical-heap-percentage to 90% of the heap, and the eviction-heap-percentage to 80% of the critical heap. SQLFire also sets resource management properties for eviction and garbage collection if they are supported by the JVM.

-server-groups Comma-separated list of server groups to which this member belongs. Used for creating tables in particular sets of servers or for firing data-aware procedures in particular server groups. See CREATE TABLE and CALL. If this option is not specified, the server belongs only to the default server group. The default server group has no name and contains all members of the distributed system.
Note: SQLFire converts the server group names that you specify to use all-uppercase characters.
-run-netserver If true, starts a network server that can service thin clients. See the -client-bind-address and -client-port options to specify where the server should listen. The default is true.

If set to false, the -client-bind-address and -client-port options have no effect.
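
For example, the following sketch starts the network server on a specific interface and port (the address and port shown are placeholders for values in your environment):

-- the address and port below are placeholders
sqlf server start -dir=server1 -client-bind-address=192.168.1.10 -client-port=1530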

-sync Determines whether the sqlf server command returns immediately if the server reaches a "waiting" state. A locator or server reaches the "waiting" state on startup if the member depends on another server or locator to provide up-to-date disk store persistence files. This type of dependency can occur if the locator or server that you are starting was not the last member to shut down in the distributed system, and another member stores the most recent version of persisted data for the system.

Specifying -sync=false (the default) causes the sqlf command to return control immediately after the member reaches "waiting" state. With -sync=true, the sqlf command does not return control until after all dependent members have booted and the member has finished synchronizing disk stores.

Always use -sync=false (the default) when starting multiple members on the same machine, especially when executing sqlf commands from a shell script or batch file, so that the script file does not hang while waiting for a particular SQLFire member to start. You can use the sqlf locator wait and/or sqlf server wait commands later in the script to verify that each server has finished synchronizing and has reached the "running" state. For example:
#!/bin/bash

# Start all local SQLFire members to waiting state, regardless of which member holds the most recent
# disk store files:

sqlf locator start -dir=/locator1 -sync=false
sqlf server start -client-port=1528 -locators=localhost[10334] -dir=/server1 -sync=false
sqlf server start -client-port=1529 -locators=localhost[10334] -dir=/server2 -sync=false

# Wait until all members have finished synchronizing and starting:

sqlf locator wait -dir=/locator1
sqlf server wait -dir=/server1
sqlf server wait -dir=/server2

# Continue any additional tasks that require access to the SQLFire members...

[...]

As an alternative to using sqlf server wait, you can monitor the current status of SQLFire members using the STATUS column in the MEMBERS system table.
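
For example, assuming the STATUS column noted above, you can check each member's state from an interactive sqlf session:

sqlf> connect client 'localhost:1527';
sqlf> select ID, KIND, STATUS from sys.members;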

-rebalance Causes the new member to trigger a rebalancing operation for all partitioned tables in the system. The system always tries to satisfy the redundancy of all partitioned tables on new member startup regardless of this option.
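
For example, the following sketch starts an additional data store and triggers an immediate rebalance of existing partitioned tables (the locator address is a placeholder for a running locator in your system):

-- the locator address below is a placeholder
sqlf server start -dir=dir4 -locators=localhost[10334] -rebalance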
-config-scripts Comma-separated list of files containing initial SQL commands to be executed by this member prior to executing any other commands. The format of these files is the same as that required by the SQLFire command shell.

The scripts in "-config-scripts" should include configuration and other options, or initial DDL statements to be executed before any other operation in the member, including execution of initial DDLs from other members or those persisted in the default disk store of this member.
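
For example, the following sketch supplies a hypothetical configuration script at startup (the file name and path are placeholders):

-- config.sql is a hypothetical file of configuration commands and initial DDL
sqlf server start -dir=dir1 -config-scripts=/path/to/config.sql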

-init-scripts As with -config-scripts, this is a comma-separated list of files containing initial SQL commands to be executed, in the same format as that required by the SQLFire command shell. Unlike -config-scripts, these commands are executed after the initial DDL replay from persisted data in this member or from existing members has completed. This brings the metadata to a consistent state with the cluster before the script is executed.

In the script file, you can include commands that require the existence of tables and other objects. For example, you might include DML statements that load initial data into existing tables.
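
For example, the following sketch runs a hypothetical data-loading script after the initial DDL replay completes (the file name and path are placeholders):

-- load_data.sql is a hypothetical file of DML statements
sqlf server start -dir=dir1 -init-scripts=/path/to/load_data.sql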

-bind-address The address to which this peer binds for receiving peer-to-peer messages. By default sqlf uses the hostname, or localhost if the hostname points to a local loopback address.
-client-bind-address Address to which the network controller binds for client connections. This takes effect only if the -run-netserver option is not set to false.
-client-port Port that the network controller listens on for client connections, 1-65535, with a default of 1527. This takes effect only if the -run-netserver option is not set to false.
-critical-heap-percentage Sets the Resource Manager's critical heap threshold as a percentage of the old-generation heap, 0-100. If you set -max-heap and -initial-heap, the default value for critical-heap-percentage is 90% of the heap size. Use this switch to override the default.

When this limit is breached, the system starts canceling memory-intensive queries, throws low memory exceptions for new SQL statements, and so forth, to avoid running out of memory.

-eviction-heap-percentage Sets the memory usage percentage threshold (0-100) that the Resource Manager will use to start evicting data from the heap. By default, the eviction threshold is 80% of whatever is set for -critical-heap-percentage. Use this switch to override the default.
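
For example, the following sketch sets an explicit heap size and overrides both thresholds (the heap size and percentage values are illustrative only):

-- heap size and threshold values below are illustrative
sqlf server start -dir=server1 -initial-heap=1024m -max-heap=1024m
     -critical-heap-percentage=85 -eviction-heap-percentage=70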
-mcast-port Multicast port used to communicate with other members of the distributed system. If zero, multicast is disabled for both member discovery and distribution. Valid values are in the range 0..65535, with a default of 10334. This value must be consistent across all members of the cluster.
-mcast-address Multicast address used to discover other members of the distributed system. Only used if mcast-port is non-zero.

The default multicast address is 239.192.81.1 as assigned by IANA (http://www.iana.org/assignments/multicast-addresses).

This must be consistent across all members of the cluster.

-locators List of locators as comma-separated host:port values used to communicate with running locators in the system and thus discover other peers of the distributed system. The list must include all locators in use, and must be configured consistently for every member of the distributed system.

The default is no locators, so the system uses multicast for peer discovery.

Use of locators is recommended for production systems.

-start-locator Specifies an embedded locator in this server that is started and stopped automatically with this server. The locator is specified as <address>[<port>] (note the square brackets), or just <port>. When address is not specified, the one specified in "-bind-address" is used, if set. Otherwise the machine's default address is used.
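
For example, the following sketch starts a server with an embedded locator and then starts a second server that uses it for discovery (the port 10101 is an arbitrary example):

-- port 10101 is an arbitrary example port
sqlf server start -dir=server1 -start-locator=localhost[10101]
sqlf server start -dir=server2 -locators=localhost[10101]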
-host-data If set to false, this peer does not host table data and acts as an accessor member, while still having the capability to route queries to the appropriate data stores and aggregate the results.
-auth-provider Authentication mechanism for client connections. If -server-auth-provider is not specified, then this same mechanism is also used for joining the cluster. Supported values are BUILTIN and LDAP. Note that SQLFire enables SQL authorization by default if you specify a client authentication mechanism with -auth-provider.

By default, no authentication is used.

-server-auth-provider Authentication mechanism for joining the cluster and talking to other servers and locators in the cluster. Supported values are BUILTIN and LDAP. By default, SQLFire uses the value of -auth-provider if it is specified, otherwise no authentication is used.
-user If the servers or locators have been configured to use authentication, this option specifies the user name to use for booting the server and joining the distributed system.
-password If the servers or locators are configured for authentication, this option specifies the password for the user (specified with the -user option) to use for booting the server and joining the distributed system.

The password value is optional. If you omit the password, sqlf prompts you to enter a password from the console.
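
For example, assuming authentication has already been configured for the system (for example, in sqlfire.properties), omitting the password value causes sqlf to prompt for it instead of exposing it on the command line; the user name shown is a placeholder:

-- gem1 is a placeholder user name; sqlf prompts for the password
sqlf server start -dir=server1 -user=gem1 -password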

-log-file Path of the file to which this member writes log messages (default is sqlfserver.log in the working directory).
-<prop-name> Any other SQLFire boot property such as "log-level". For example, to start a SQLFire server as a JMX Manager, use the boot properties described in Using Java Management Extensions (JMX).

See Configuration Properties for a complete list of boot properties.
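
For example, the log-level boot property mentioned above can be passed directly on the command line (the value shown is illustrative):

-- "fine" is an illustrative log-level value
sqlf server start -dir=server1 -log-level=fine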

Description

You can use the sqlf utility to start servers that host data (data stores) as well as servers that do not host data (accessors); either kind of member can service all SQL statements by routing them to the appropriate data stores and aggregating the results. Even for a member that hosts data for a table, not all of the table's data is necessarily available in that member (for example, for DML statements that reference partitioned tables; see PARTITION BY Clause), so routing to other data stores may still be required. It is also possible for a member to be a data store and yet host no data for a particular table because it shares no server groups with the table (see SERVER GROUPS Clause).

Starting a server generates output similar to the following (<XXX> is the path of the current working directory):

Starting SQLFire Server using multicast for peer discovery: 239.192.81.1[10334]
Starting network server for SQLFire Server at address localhost/127.0.0.1[1527]
SQLFire Server pid: 2015 status: running
Logs generated in <XXX>/sqlfserver.log

This starts the server process locally, uses the current working directory to store logs, statistics, and the data dictionary, and uses multicast for peer discovery (address 239.192.81.1, port 10334). Any persistent tables created using the default disk store also use this directory to manage their data files.

The data dictionary is managed in a subdirectory called 'datadictionary' and persisted by default. This subdirectory contains the persistent metadata of all the DDLs executed in the distributed system from any clients or peers, and is necessary for a SQLFire server or locator member to start up and function properly.

As the output above indicates, a network server is also started by default that binds to localhost on port 1527. This service allows thin clients to connect to the server and execute SQL commands using the DRDA protocol.

A SQLFire server that you are starting may require other cluster members (locators or servers) to boot before it can confirm that its data is consistent with those members' data. Even with no persistent or overflow tables, each server locally persists a copy of the data dictionary and may remain in a "waiting" state until dependent locators or servers come back online, to ensure that it has all the latest updates to the data dictionary:

Starting SQLFire Server using locators for peer discovery: localhost[10334]
Starting network server for SQLFire Server at address localhost/127.0.0.1[1528]
Logs generated in /Users/yozie/vmware/sqlf/vFabric_SQLFire_11_b40332/server1/sqlfserver.log
SQLFire Server pid: 9502 status: waiting
Region /_DDL_STMTS_META_REGION has potentially stale data. It is waiting for another member to recover the latest data.
My persistent id:

  DiskStore ID: ff7d62c5-4e03-4c74-975f-c8d3639c1cee
  Name: 
  Location: /10.0.1.31:/Users/yozie/vmware/sqlf/vFabric_SQLFire_11_b40332/server1/./datadictionary

Members with potentially new data:
[
  DiskStore ID: ea249383-b103-43d5-957b-f9789eadd37c
  Name: 
  Location: /10.0.1.31:/Users/yozie/vmware/sqlf/vFabric_SQLFire_11_b40332/server2/./datadictionary
]
Use the "sqlf list-missing-disk-stores" command to see all disk stores that are being waited on by other members. - See log file for details.

Use -sync=false when starting SQLFire members in a shell script or batch file to return control to the script immediately after the member reaches the "waiting" state.

If another server is now started to join the cluster using the same multicast port (the default port, as for the first server above), the startup message shows the other members in the distributed system:

sqlf server start -dir=server2 -client-port=1528
Starting SQLFire Server using multicast for peer discovery: 239.192.81.1[10334]
Starting network server for SQLFire Server at address localhost/127.0.0.1[1528]
Distributed system now has 2 members.
Other members: serv1(2015:data store)<v0>:32013/48225
SQLFire Server pid: 2032 status: running
Logs generated in <XXX>/sqlfserver.log

The lines beginning with "Distributed system" and "Other members" show the IDs of the other members in the distributed system.

The startup script maintains the running status of the server in a file named .sqlfserver.ser in the working directory.
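
For example, querying the status from the server's working directory produces output similar to the following (the pid and status values vary):

sqlf server status -dir=server1
SQLFire Server pid: 2015 status: running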

Example: Multiple Servers Using Default Multicast Port for Peer Discovery

-- start a server using default mcast-port (10334) for discovery,
-- with current directory as the working directory, and network server
-- running on default localhost:1527
sqlf server start
-- start a second server talking to the first with dir2 as working directory;
-- network server is started explicitly on localhost:1528
sqlf server start -dir=dir2 -client-port=1528
-- start another server talking to the first two with dir3 as working
-- directory; network server is disabled explicitly
sqlf server start -dir=dir3 -run-netserver=false
-- check from the SQLFire command shell
sqlf
sqlf> connect client 'localhost:1527';
sqlf> select ID from sys.members;
-- output will show three members in the distributed system

-- stop everything
sqlf> quit;
sqlf server stop -dir=dir3
sqlf server stop -dir=dir2
sqlf server stop

Example: Servers and Client Using Locator

-- start a locator for peer discovery on port 3241
-- listening on all addresses
sqlf locator start -peer-discovery-port=3241
-- start three servers as before using different client ports
-- and using the above started locator
sqlf server start -dir=dir1 -locators=localhost:3241 
     -client-port=1528
sqlf server start -dir=dir2 -locators=localhost:3241 
     -client-port=1529
sqlf server start -dir=dir3 -locators=localhost:3241 
     -client-port=1530
-- check from the SQLFire command shell
-- connect using the locator's client-port (default 1527)
-- for load balanced connection to one of the servers 
-- transparently and for reliable failover in case the 
-- server goes down
sqlf
sqlf> connect client 'localhost:1527';
sqlf> select ID, KIND from sys.members;
-- output will show four members with three as data stores
-- and one as locator

-- stop everything
sqlf> quit;
sqlf server stop -dir=dir3
sqlf server stop -dir=dir2
sqlf server stop -dir=dir1
sqlf locator stop

Example: Multiple Locators for High Availability

-- start two locators that are configured to talk
-- to one another
sqlf locator start -peer-discovery-port=3241 
     -locators=localhost:3242
sqlf locator start -dir=loc2 -peer-discovery-port=3242 
     -locators=localhost:3241 -client-port=1528
-- start four servers that can talk to both
-- the above locators
sqlf server start -dir=dir1 
     -locators=localhost:3241,localhost:3242 
     -client-port=1529
sqlf server start -dir=dir2 
     -locators=localhost:3241,localhost:3242 
     -client-port=1530
sqlf server start -dir=dir3 
     -locators=localhost:3241,localhost:3242 
     -client-port=1531
sqlf server start -dir=dir4 
     -locators=localhost:3241,localhost:3242 
     -client-port=1532
-- check all the members in the distributed system
sqlf
sqlf> connect client 'localhost:1527';
sqlf> select ID, KIND from sys.members order by KIND DESC;
-- output will show six members with two locators
-- followed by four data stores
sqlf> quit;

-- now bring down the first locator and check that
-- new servers can still join
sqlf locator stop
sqlf server start -dir=dir5 
     -locators=localhost:3241,localhost:3242 
     -client-port=1533
-- check all the members in the distributed system again
sqlf
sqlf> connect client 'localhost:1528';
sqlf> select ID, KIND from sys.members order by KIND DESC;
-- output will show six members with one locator
-- followed by five data stores

-- stop everything
sqlf> quit;
sqlf server stop -dir=dir5
sqlf server stop -dir=dir4
sqlf server stop -dir=dir3
sqlf server stop -dir=dir2
sqlf server stop -dir=dir1
sqlf locator stop -dir=loc2

Example: Servers and Accessors with a Locator

-- start a locator for peer discovery on port 3241
sqlf locator start -peer-discovery-port=3241
-- start three servers using different client ports and
-- using the above started locator
sqlf server start -dir=dir1 -locators=localhost:3241 -client-port=1528
sqlf server start -dir=dir2 -locators=localhost:3241 -client-port=1529
sqlf server start -dir=dir3 -locators=localhost:3241 -client-port=1530
-- start a couple of peers that will not host data
sqlf server start -dir=dir4 -locators=localhost:3241 -client-port=1531 -host-data=false
sqlf server start -dir=dir5 -locators=localhost:3241 -client-port=1532 -host-data=false
-- check from the SQLFire command shell
-- connect using the locator's client-port (default 1527)
-- for load balanced connection to one of the servers/accessors 
-- transparently and for reliable failover in case the 
-- server/accessor goes down
sqlf
sqlf> connect client 'localhost:1527';
sqlf> select ID, KIND from sys.members;
-- output will show six members with one as locator, three
-- data stores and two accessors

Example: Servers and Locator Using BUILTIN Authentication

-- start a locator for peer discovery on port 3241
-- with authentication; below specifies a builtin 
-- system user on the command-line itself
-- (sqlfire.user.gem1=gem1) but it is recommended 
-- to be in sqlfire.properties in encrypted form 
-- using the "sqlf encrypt-password" tool
sqlf locator start -peer-discovery-port=3241 -auth-provider=BUILTIN  -sqlfire.user.gem1=gem1 -user=gem1 -password=gem1
-- start three servers using different client
-- ports and using the above started locator
sqlf server start -dir=dir1 -locators=localhost:3241 -client-port=1528 -auth-provider=BUILTIN  -sqlfire.user.gem1=gem1 -user=gem1 -password=gem1
sqlf server start -dir=dir2 -locators=localhost:3241 -client-port=1529 -auth-provider=BUILTIN  -sqlfire.user.gem1=gem1 -user=gem1 -password=gem1
sqlf server start -dir=dir3 -locators=localhost:3241 -client-port=1530 -auth-provider=BUILTIN  -sqlfire.user.gem1=gem1 -user=gem1 -password=gem1
-- check from the SQLFire command shell
-- connect using the locator's client-port
-- (default 1527) for load balanced
-- connection to one of the servers/accessors
-- transparently and for reliable
-- failover in case the server/accessor goes down
sqlf
sqlf> connect client 'localhost:1527;user=gem1;password=gem1';
-- add a new database user
sqlf> call sys.create_user('sqlfire.user.gem2', 'gem2');
-- check members
sqlf> select ID, KIND from sys.members;
-- output will show four members with one as
-- locator and three data stores

Example: Servers and Accessors with Server Groups and Locator

-- start a locator for peer discovery on port 3241
sqlf locator start -peer-discovery-port=3241

-- start three servers using different client
-- ports and using the above started locator
-- using two server groups (ordersdb and 
-- customers) with one server in both
-- the groups

sqlf server start -dir=dir1 -locators=localhost:3241 -client-port=1528 -server-groups=ordersdb
sqlf server start -dir=dir2 -locators=localhost:3241 -client-port=1529 -server-groups=customers
sqlf server start -dir=dir3 -locators=localhost:3241 -client-port=1530 -server-groups=ordersdb,customers

-- start a couple of peers that will
-- not host data but in both server 
-- groups; using server groups in 
-- accessors is only useful if 
-- executing data-aware procedures 
-- targeted to those members
sqlf server start -dir=dir4 -locators=localhost:3241 -client-port=1531 -host-data=false -server-groups=ordersdb,customers
sqlf server start -dir=dir5 -locators=localhost:3241 -client-port=1532 -host-data=false -server-groups=ordersdb,customers

-- check from the SQLFire command shell
-- connect using the locator's client-port
-- (default 1527) for load balanced
-- connection to one of the servers/accessors
-- transparently and for reliable
-- failover in case the server/accessor 
-- goes down
sqlf
sqlf> connect client 'localhost:1527';
sqlf> select ID, KIND from sys.members;
-- example output
ID                             |KIND
-----------------------------------------------
pc29(23372)<v0>:28185/36245    |locator(normal)
pc29(23742)<v3>:19653/33509    |datastore(normal)
pc29(23880)<v4>:37719/51114    |accessor(normal)
pc29(23611)<v2>:52031/53106    |datastore(normal)
pc29(24021)<v5>:58510/43678    |accessor(normal)
pc29(23503)<v1>:30307/36105    |datastore(normal)

6 rows selected


-- also check for server groups
sqlf> select ID, KIND, SERVERGROUPS from sys.members;
-- example output
ID                             |KIND              |SERVERGROUPS
-----------------------------------------------------------------
pc29(23372)<v0>:28185/36245    |locator(normal)   |
pc29(23742)<v3>:19653/33509    |datastore(normal) |CUSTOMERS,ORDERSDB
pc29(23880)<v4>:37719/51114    |accessor(normal)  |CUSTOMERS,ORDERSDB
pc29(23611)<v2>:52031/53106    |datastore(normal) |CUSTOMERS
pc29(24021)<v5>:58510/43678    |accessor(normal)  |CUSTOMERS,ORDERSDB
pc29(23503)<v1>:30307/36105    |datastore(normal) |ORDERSDB

6 rows selected