Column Definition

A column definition specifies the name of a column and its data type, along with any optional column constraints, default value, or identity column attributes.

Syntax

column-name data-type
    [ column-constraint ]*
    [ [ WITH ] DEFAULT { constant-expression | NULL } 
      | [ GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY
          [ ( START WITH integer-constant [, INCREMENT BY integer-constant ] ) ] ] ]
    [ column-constraint ]*

The set of supported data types is detailed in Data Types. Identity columns must be of type BIGINT or INTEGER.
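
For example, the following statement combines a column-level primary key constraint with a constant default value (a minimal sketch; the table and column names are illustrative and not taken from the SQLFire examples):

create table orders (order_id int not null primary key, status char(1) default 'N');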

Identity Columns

SQLFire supports both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns only for BIGINT and INTEGER data types. The START WITH and INCREMENT BY clauses are supported only for GENERATED BY DEFAULT identity columns. Creating an identity column does not create an index on the column.

For a GENERATED ALWAYS identity column, SQLFire increments the default value on every insertion, and stores the incremented value in the column. You cannot insert a value directly into a GENERATED ALWAYS identity column, and you cannot update a value in a GENERATED ALWAYS identity column. Instead, you must either specify the DEFAULT keyword when inserting data into the table or you must leave the identity column out of the insertion column list.

Consider a table with the following column definition:

create table greetings (i int generated always as identity, ch char(50));

You can insert rows into the table using either the DEFAULT keyword, or by omitting the identity column from the INSERT statement:

insert into greetings values (DEFAULT, 'hello');
insert into greetings(ch) values ('hi');
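
By contrast, you cannot supply an explicit value for a GENERATED ALWAYS identity column; a statement such as the following sketch fails with an error:

insert into greetings values (1, 'hello');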

The values that SQLFire automatically generates for a GENERATED ALWAYS identity column are unique.

For a GENERATED BY DEFAULT identity column, SQLFire increments and uses a default value for an INSERT only when no explicit value is given. To use the generated default value, either specify the DEFAULT keyword when inserting into the identity column, or leave the identity column out of the INSERT column list.

In contrast to GENERATED ALWAYS identity columns, with a GENERATED BY DEFAULT column you can specify an identity value to use instead of the generated default value. To specify a value, include it in the INSERT statement.

For example, consider a table created using the statement:

create table greetings (i int generated by default as identity, ch char(50)); 

The following statement specifies the value "1" for the identity column:

insert into greetings values (1, 'hi'); 

These statements both use generated default values:

insert into greetings values (DEFAULT, 'hello');
insert into greetings(ch) values ('bye');

Although the automatically-generated values in a GENERATED BY DEFAULT identity column are unique, a GENERATED BY DEFAULT column does not guarantee unique identity values for all rows in the table. For example, in the above statements the rows containing "hi" and "hello" both have an identity value of "1." This occurs because the generated column starts at "1" and the user-specified value was also "1."

To avoid duplicating identity values (for example, during an import operation), you can use the START WITH clause to specify the first identity value that SQLFire should assign and increment. Or, you can use a primary key or a unique constraint on the GENERATED BY DEFAULT identity column to check for and disallow duplicates.

By default, the initial value of a GENERATED BY DEFAULT identity column is 1, and the value is incremented by 1 for each INSERT. Use the optional START WITH clause to specify a new initial value. Use the optional INCREMENT BY clause to change the increment value used during each INSERT.
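
For example, the following statement (a minimal sketch based on the greetings table used above) starts the generated identity values at 10 and increments them by 5 on each INSERT:

create table greetings (i int generated by default as identity (start with 10, increment by 5), ch char(50));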

See also Auto-Generated Columns for information about limitations with identity columns.

CONSTRAINT Clause

A CONSTRAINT clause is an optional part of a CREATE TABLE or ALTER TABLE statement that defines a rule to which table data must conform.

There are two types of constraint:
  • Column-level constraints refer to a single column in the table and do not specify a column name (except check constraints). They refer to the column that they follow.
  • Table-level constraints refer to one or more columns in the table. Table-level constraints specify the names of the columns to which they apply. Table-level CHECK constraints can refer to 0 or more columns in the table.
Column and table constraints include:
  • NOT NULL— Specifies that a column cannot hold NULL values (constraints of this type are not nameable).
  • PRIMARY KEY— Specifies a column (or multiple columns if specified in a table constraint) that uniquely identifies a row in the table. The identified columns must be defined as NOT NULL.
  • UNIQUE— Specifies that values in the column must be unique. NULL values are not allowed.
  • FOREIGN KEY— Specifies that the values in the columns must correspond to values in referenced primary key or unique columns or that they are NULL.
    Note: If the foreign key consists of multiple columns and any column is NULL, the whole key is considered NULL. SQLFire permits the insert regardless of the values in the non-null columns.
    Note: SQLFire supports only ON DELETE RESTRICT for foreign key references, and you can optionally specify this behavior explicitly. SQLFire checks dependent tables for foreign key constraints. If any row in a dependent table violates a foreign key constraint, the transaction is rolled back and an exception is thrown. SQLFire does not support cascade deletion.
  • CHECK— Specifies rules for values in a column, or specifies a wide range of rules for values when included as a table constraint. The CHECK constraint has the same format and restrictions for column and table constraints.

Column constraints and table constraints have the same function; the difference is in where you specify them. Table constraints allow you to specify more than one column in a PRIMARY KEY, UNIQUE, CHECK, or FOREIGN KEY constraint definition. Column-level constraints (except for check constraints) refer to only one column.

If you do not specify a name for a column or table constraint, then SQLFire generates a unique name.

Column Constraint

A column constraint applies a rule to a single column in the table. You include a column constraint as part of the column definition.

{ 
    NOT NULL | 
    [ [ CONSTRAINT constraint-name ] 
    { 
        CHECK ( search-condition ) | 
        { 
            PRIMARY KEY | 
            UNIQUE | 
            REFERENCES table-name [ ( column-name [ , column-name ]* ) ] 
                [ ON DELETE RESTRICT ] 
        } 
    } 
} 

Table Constraint

Use a table constraint to specify table-level constraints: a primary key, a unique key, or a foreign key. Alternatively, you can specify a check constraint that is tested for each INSERT, UPDATE, and DELETE operation.

[ CONSTRAINT constraint-name ] 
{ 
    CHECK ( search-condition ) | 
    { 
        PRIMARY KEY ( column-name [ , column-name ]* ) | 
        UNIQUE ( column-name [ , column-name ]* ) | 
        FOREIGN KEY ( column-name [ , column-name ]* ) 
            REFERENCES table-name [ ( column-name [ , column-name ]* ) ] 
            [ ON DELETE RESTRICT ] 
    } 
}

CHECK Constraint Search Condition

A search condition is a boolean expression that is specified in a CHECK constraint. The expression cannot be dynamic and so cannot contain dynamic parameters, date/time functions, user functions, or sub-queries.

If a check constraint is included as part of a column-definition, a column reference can only be made to the same column.

If a check constraint is specified as part of a table definition, column references can specify any columns that were previously defined in the CREATE TABLE statement. When specified in a table constraint, the search condition must be satisfied for all rows in the table. The search condition is applied to each row that is modified on an INSERT or UPDATE at the time of the row modification. The entire statement is aborted if any check constraint is violated.

Examples of Constraints

-- column-level primary key constraint named OUT_TRAY_PK: 
CREATE TABLE SAMP.OUT_TRAY 
( 
    SENT TIMESTAMP, 
    DESTINATION CHAR(8), 
    SUBJECT CHAR(64) NOT NULL CONSTRAINT OUT_TRAY_PK PRIMARY KEY, 
    NOTE_TEXT VARCHAR(3000) 
 ); 

-- the table-level primary key definition allows you to 
-- include more than one column in the primary key definition: 

CREATE TABLE SAMP.SCHED 
 ( 
    CLASS_CODE CHAR(7) NOT NULL, 
    DAY SMALLINT NOT NULL, 
    STARTING TIME, 
    ENDING TIME, 
    PRIMARY KEY (CLASS_CODE, DAY) 
 );


-- Use a column-level constraint for an arithmetic check 
-- Use a table-level constraint 
-- to make sure that an employee's taxes do not 
-- exceed the bonus 

CREATE TABLE SAMP.EMP   
 ( 
    EMPNO CHAR(6) NOT NULL CONSTRAINT EMP_PK PRIMARY KEY, 
    FIRSTNME CHAR(12) NOT NULL, 
    MIDINIT VARCHAR(12) NOT NULL, 
    LASTNAME VARCHAR(15) NOT NULL, 
    SALARY DECIMAL(9,2) CONSTRAINT SAL_CK CHECK (SALARY >= 10000), 
    BONUS DECIMAL(9,2), 
    TAX DECIMAL(9,2), 
    CONSTRAINT BONUS_CK CHECK (BONUS > TAX) 
  );

-- use a check constraint to allow only appropriate 
-- abbreviations for the meals 

CREATE TABLE FLIGHTS 
 ( 
    FLIGHT_ID CHAR(6) NOT NULL , 
    SEGMENT_NUMBER INTEGER NOT NULL , 
    ORIG_AIRPORT CHAR(3), 
    DEPART_TIME TIME, 
    DEST_AIRPORT CHAR(3), 
    ARRIVE_TIME TIME, 
    MEAL CHAR(1) CONSTRAINT MEAL_CONSTRAINT 
    CHECK (MEAL IN ('B', 'L', 'D', 'S')), 
    PRIMARY KEY (FLIGHT_ID, SEGMENT_NUMBER) 
 );

-- create a table with a column-level primary key constraint 

CREATE TABLE CITIES
( 
   CITY_ID INTEGER NOT NULL CONSTRAINT CITIES_PK Primary Key ,
   CITY_NAME VARCHAR(24) NOT NULL, 
   COUNTRY VARCHAR(26) NOT NULL, 
   AIRPORT VARCHAR(3), 
   LANGUAGE VARCHAR(16) 
 ) 

-- and a column-level foreign key constraint
CREATE TABLE METROPOLITAN 
( 
  HOTEL_ID INT NOT NULL CONSTRAINT HOTELS_PK PRIMARY KEY, 
  HOTEL_NAME VARCHAR(40) NOT NULL, 
  CITY_ID INT CONSTRAINT METRO_FK REFERENCES CITIES 
)

-- add a unique constraint to a column

CREATE TABLE METROPOLITAN 
( 
  HOTEL_ID INT NOT NULL, 
  HOTEL_NAME VARCHAR(40) NOT NULL, 
  CITY_ID INT ,
  constraint ID_Unique unique (HOTEL_ID) 
) 

-- create a table whose city_id column references the 
-- primary key in the Cities table 
-- using a column-level foreign key constraint 

CREATE TABLE CONDOS 
( 
  CONDO_ID INT NOT NULL CONSTRAINT hotels_PK PRIMARY KEY, 
  CONDO_NAME VARCHAR(40) NOT NULL, 
  CITY_ID INT CONSTRAINT city_fk REFERENCES Cities ON DELETE RESTRICT 
)

PARTITION BY Clause

The SQLFire partitioning clause controls the locality and distribution of data in the given server groups. This is important for optimizing queries, and it is essential for cross-table joins. The clause can be one of column partitioning, range partitioning, list partitioning, or generic expression partitioning.

Syntax

{
    {
      PARTITION BY { PRIMARY KEY | COLUMN ( column-name [ , column-name ]* ) }
      |
      PARTITION BY RANGE ( column-name )
      (
          VALUES BETWEEN value AND value
              [ , VALUES BETWEEN value AND value ]*
      )
      |
      PARTITION BY LIST ( column-name )
      (
          VALUES ( value [ , value ]* )
              [ , VALUES ( value [ , value ]* ) ]*
      )
      |
      PARTITION BY ( expression )
    }
  [ COLOCATE WITH ( table-name [ , table-name ]* ) ]
}
[ REDUNDANCY integer-constant ]
[ BUCKETS integer-constant ]
[ RECOVERYDELAY integer-constant ]
[ MAXPARTSIZE integer-constant ]

Description

If you do not specify a partitioning clause, the table is replicated on all members of the table's server groups. You can also use the REPLICATE keyword explicitly to create a replicated table.
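
For example, the following statement (a minimal sketch; the table is illustrative) explicitly creates a replicated table:

CREATE TABLE COUNTRY_CODES
(
  COUNTRY_ISO_CODE CHAR(2) NOT NULL PRIMARY KEY,
  COUNTRY VARCHAR(26) NOT NULL
) REPLICATE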

Partitioning by Column

Use the PARTITION BY COLUMN clause to provide a set of column names that will determine the partitioning. As a shortcut you can use PARTITION BY PRIMARY KEY to refer to the primary key columns. The hash function is an internal function that typically returns the EXCLUSIVE OR of the hashcode() return values for the columns' underlying Java types.

-- partition by primary key 
CREATE TABLE HOTELAVAILABILITY 
( HOTEL_ID INT NOT NULL, 
  BOOKING_DATE DATE NOT NULL, 
  ROOMS_TAKEN INT DEFAULT 0,
  PRIMARY KEY (HOTEL_ID, BOOKING_DATE) 
) PARTITION BY PRIMARY KEY

-- partition by column (booking date only) 
CREATE TABLE HOTELAVAILABILITY 
( HOTEL_ID INT NOT NULL, 
  BOOKING_DATE DATE NOT NULL, 
  ROOMS_TAKEN INT DEFAULT 0, 
  PRIMARY KEY (HOTEL_ID, BOOKING_DATE) 
) PARTITION BY COLUMN (BOOKING_DATE)

Partitioning by a Range of Values

Use the PARTITION BY RANGE clause to specify the ranges of a field that should be colocated. Doing so ensures the locality of data in case of range queries and for cross table joins. The lower limit of the range is inclusive and the upper limit is exclusive. It is not necessary for the ranges to cover the whole spectrum of the possible values for the field. The ranges not covered are automatically partitioned on the available servers by SQLFire with no guarantee of locality for those values.

-- partition by range 
CREATE TABLE HOTELAVAILABILITY 
( HOTEL_ID INT NOT NULL, 
  BOOKING_DATE DATE NOT NULL, 
  ROOMS_TAKEN INT DEFAULT 0,
  PRIMARY KEY (HOTEL_ID, BOOKING_DATE)
) PARTITION BY RANGE (BOOKING_DATE) 
( VALUES BETWEEN '2010-01-01' AND '2010-04-01', 
  VALUES BETWEEN '2010-04-01' AND '2010-07-01', 
  VALUES BETWEEN '2010-07-01' AND '2010-10-01', 
  VALUES BETWEEN '2010-10-01' AND '2011-01-01'
)

Partitioning by a List of Values

Use the PARTITION BY LIST clause to specify the set of values of a field that should be colocated to help optimization of queries and for cross table joins. It is not necessary to list all the possible values for the field, and the values that are not covered are partitioned automatically.

-- partition by list 
CREATE TABLE FLIGHTS 
( 
  FLIGHT_ID CHAR(6) NOT NULL , 
  SEGMENT_NUMBER INTEGER NOT NULL , 
  ORIG_AIRPORT CHAR(3), 
  PRIMARY KEY (FLIGHT_ID, SEGMENT_NUMBER) 
) PARTITION BY LIST (ORIG_AIRPORT) 
( VALUES ('PDX', 'LAX'), 
  VALUES ('AMS', 'DUB'), 
  VALUES ('DTW', 'ORL')
) 

Partitioning by Expression

The PARTITION BY clause with an expression is a type of hash-like partitioning where the value to hash on is evaluated from the expression. The expression must reference only field names from the table. This allows rows to be colocated based on some function of their values.

Note: When you colocate tables that are partitioned by expression, you must ensure that the expressions resolve to the same values in both tables, so that data can be colocated. SQLFire ensures that the parent table uses an expression for partitioning, but it performs no other compatibility checks.

-- partition by expression 
-- This example will colocate all the rows that are booked 
-- in the same month together 
CREATE TABLE HOTELAVAILABILITY 
(
  HOTEL_ID INT NOT NULL, 
  BOOKING_DATE DATE NOT NULL, 
  ROOMS_TAKEN INT DEFAULT 0, 
  PRIMARY KEY (HOTEL_ID, BOOKING_DATE)
) PARTITION BY (MONTH(BOOKING_DATE)) 

Colocating Related Rows

The COLOCATE WITH clause specifies the table with which a new partitioned table must be colocated. The referenced table must already exist.

When two tables are partitioned on columns and colocated, rows that have the same values for those columns in the two tables are located on the same SQLFire member. For example, with range or list partitioning, the rows that satisfy a particular range or list are colocated on the same member for all of the colocated tables.

When you colocate tables that are partitioned by expression, SQLFire only ensures that the parent table is also partitioned by an expression. No additional compatibility checks are performed. You must ensure that the expressions resolve to the same values in both tables, so that data can be colocated.

The COLOCATE WITH clause is only used as part of a partitioning clause. If no PARTITION BY clause is used when there is a COLOCATE WITH clause, SQLFire throws a SQLException. The partitioning clause that you specify for the new table must be compatible with the clause used to create the referenced table.

For partitioned tables, any SERVER GROUPS clause that you specify must be the same as the clause used to create the referenced table.

The number of BUCKETS for partitioned tables must also be the same, or SQLFire throws an exception.

If you do not specify a COLOCATE WITH clause, SQLFire attempts to perform default colocation by using foreign key references. Default colocation is performed only if you omit the BUCKETS clause entirely, or if you specify a BUCKETS value that matches the BUCKETS value of the referenced tables.

CREATE TABLE COUNTRIES 
( 
  COUNTRY VARCHAR(26) NOT NULL CONSTRAINT COUNTRIES_UNQ_NM Unique, 
  COUNTRY_ISO_CODE CHAR(2) NOT NULL CONSTRAINT COUNTRIES_PK PRIMARY KEY, 
  REGION VARCHAR(26), 
  CONSTRAINT COUNTRIES_UC
    CHECK (country_ISO_code = upper(country_ISO_code) ) 
) PARTITION BY PRIMARY KEY

-- Explicitly colocate the CITIES table with the COUNTRIES table. 
-- Both of these tables will be hash partitioned on the 
-- COUNTRY_ISO_CODE 
-- field. This guarantees that cities are colocated with 
-- the country they are in. 
CREATE TABLE CITIES 
( 
  CITY_ID INTEGER NOT NULL CONSTRAINT CITIES_PK Primary key,
  CITY_NAME VARCHAR(24) NOT NULL, 
  COUNTRY VARCHAR(26) NOT NULL, 
  AIRPORT VARCHAR(3), 
  LANGUAGE VARCHAR(16), 
  COUNTRY_ISO_CODE CHAR(2) CONSTRAINT COUNTRIES_FK 
  REFERENCES COUNTRIES (COUNTRY_ISO_CODE) 
) PARTITION BY COLUMN (COUNTRY_ISO_CODE) 
  COLOCATE WITH (COUNTRIES)
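
By contrast, the following sketch (illustrative, not part of the original example set) omits the COLOCATE WITH and BUCKETS clauses and relies instead on the default colocation described above, which uses the foreign key reference to COUNTRIES:

CREATE TABLE CITIES 
( 
  CITY_ID INTEGER NOT NULL CONSTRAINT CITIES_PK PRIMARY KEY,
  CITY_NAME VARCHAR(24) NOT NULL, 
  COUNTRY_ISO_CODE CHAR(2) CONSTRAINT COUNTRIES_FK 
    REFERENCES COUNTRIES (COUNTRY_ISO_CODE) 
) PARTITION BY COLUMN (COUNTRY_ISO_CODE)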

REDUNDANCY Clause

Use the REDUNDANCY clause to specify the number of redundant copies that should be maintained for each partition, to ensure that the partitioned table is highly available even if members fail. Because SQLFire is primarily a memory-based data management system, it is important to configure this carefully to enable failover in case of failures. A value too large adversely affects performance, network usage, and memory usage. Typically a value of 1 is recommended; this setting maintains one copy in addition to the primary copy of the table data.

Note: Colocated partitioned tables must have the same redundancy.

In this example it is assumed that you have configured SQLFire to create partitioned tables (rather than replicated tables) by default.

CREATE TABLE COUNTRIES
(
  COUNTRY VARCHAR(26) NOT NULL,
  COUNTRY_ISO_CODE CHAR(2) NOT NULL PRIMARY KEY,
  REGION VARCHAR(26)
)
REDUNDANCY 1

BUCKETS Clause

The optional BUCKETS attribute specifies the fixed number of "buckets," the smallest unit of data containment for the table that can be moved around. Data in a single bucket resides and moves together. If not specified, the number of buckets defaults to 113.

Note: Colocated tables must all have the same number of buckets. If you specify a mismatched BUCKETS value when using the COLOCATE WITH clause, SQLFire throws an exception. If you specify a mismatched BUCKETS value without using the COLOCATE WITH clause, SQLFire does not perform default colocation for the new table. If you specify the COLOCATE WITH clause but do not specify a BUCKETS value, SQLFire inherits the BUCKETS value from the colocated table.

For range or list partitioning, each range or list is mapped to one bucket for the table, so the total number of buckets cannot be less than the number of ranges or lists. If the total number of buckets is larger than the total number of ranges or lists, the values that fall outside the specified ranges or lists are distributed among all available buckets, including those that are not mapped to any of the ranges or lists.

RECOVERYDELAY Clause

Use the RECOVERYDELAY clause to specify the default time in milliseconds that existing members will wait before satisfying redundancy after a member crashes. The default is -1, which indicates that redundancy is not recovered after a member fails. By default a SQLFire system always attempts to satisfy redundancy if required when a new server member that is hosting data for the table is started.

MAXPARTSIZE Clause

The MAXPARTSIZE attribute specifies the maximum memory for any partition on a member in megabytes. Use it to load-balance partitions among available members.

-- example showing how to specify the BUCKETS, RECOVERYDELAY
-- and MAXPARTSIZE
CREATE TABLE COUNTRIES 
( 
  COUNTRY VARCHAR(26) NOT NULL, 
  COUNTRY_ISO_CODE CHAR(2) NOT NULL PRIMARY KEY, 
  REGION VARCHAR(26) 
) PARTITION BY PRIMARY KEY BUCKETS 10 RECOVERYDELAY 10 MAXPARTSIZE 50

SERVER GROUPS Clause

The servers on which the table is created are either specified explicitly by the SERVER GROUPS clause or, if a colocation clause is specified, deduced from the colocated parent table's server groups. If no server groups are specified, the table is created on the default server group of the schema.

Syntax

[ SERVER GROUPS ( server_group_name [, server_group_name ]* ) ]

Description

The default group is specified in the DEFAULT SERVER GROUP directive of the CREATE SCHEMA statement. If no server group is specified for the schema, the implicit default server group is used, which includes all servers in the distributed system.
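
For example, a schema-level default server group might be declared as in the following sketch (this assumes the DEFAULT SERVER GROUPS form of the CREATE SCHEMA statement; see the CREATE SCHEMA documentation for the exact syntax):

CREATE SCHEMA SALES DEFAULT SERVER GROUPS (SG1, SG2)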

The server group names that you specify are converted to all-uppercase characters.

Note: In order for two partitioned tables to be colocated, the SERVER GROUPS clauses in both CREATE TABLE statements must be identical.

Replicated tables must specify the same server groups, or one table's server groups must be a subset of the other table's server groups.

You can configure both partitioned tables and replicated tables to use a gateway sender. However, you must create replicated tables in the same server groups as the gateway sender(s) that you specify in the CREATE TABLE statement. See Syntax.

Example with SERVER GROUPS Clause

CREATE TABLE COUNTRIES
(
  COUNTRY VARCHAR(26) NOT NULL,
  COUNTRY_ISO_CODE CHAR(2) NOT NULL PRIMARY KEY,
  REGION VARCHAR(26)
) SERVER GROUPS (SG1, SG2)

GATEWAYSENDER Clause

The GATEWAYSENDER keyword associates the new table with one or more named gateway senders, so that DML operations against the table are replicated to the remote SQLFire cluster associated with the sender(s).

Note: You can optionally use the SQLFire ALTER TABLE command to add or remove gateway senders after you have created a table.

Syntax

[ GATEWAYSENDER ( sender-name [,sender-name ] * ) ]

Example with GATEWAYSENDER Clause

The following example associates a table with the gateway sender, TEST_SENDER.

CREATE TABLE TESTTABLE
(
  ID INT NOT NULL, 
  DESCRIPTION VARCHAR(1024), 
  ADDRESS VARCHAR(1024)
)
GATEWAYSENDER(TEST_SENDER)

The following example associates a table with two gateway senders in the system.

CREATE TABLE TESTTABLE
(
  ID INT NOT NULL, 
  DESCRIPTION VARCHAR(1024), 
  ADDRESS VARCHAR(1024) 
)
GATEWAYSENDER(TEST_SENDER1, TEST_SENDER2)

ASYNCEVENTLISTENER Clause

The ASYNCEVENTLISTENER keyword associates a table with one or more named AsyncEventListener implementations.

Syntax

[ ASYNCEVENTLISTENER (async-listener-id [,async-listener-id] * ) ]

Description

See Implementing an AsyncEventListener.

Note: The listener configuration that you specify does not have to be available at the time you create the table, so SQLFire does not display an error message if the specified listener name does not exist. Make sure that you use the same listener name with both the CREATE ASYNCEVENTLISTENER command and the CREATE TABLE command.
Note: You can optionally use the SQLFire ALTER TABLE command to add or remove asynceventlistener configurations after you have created a table.

Example with ASYNCEVENTLISTENER Clause

The following example associates a new table with two AsyncEventListener implementations.

CREATE TABLE TESTTABLE
(
   ID INT NOT NULL,
   DESCRIPTION VARCHAR(1024),
   ADDRESS VARCHAR(1024)
)
ASYNCEVENTLISTENER(TEST_LISTENER1, TEST_LISTENER2)

EVICTION BY Clause

Use the EVICTION BY clause to evict rows automatically from an in-memory table based on different criteria. You can use this clause to create an overflow table where evicted rows are written to a disk store, or you can simply destroy the evicted rows to control memory usage.

Syntax

[ EVICTION BY 
   {
      LRUMEMSIZE integer-constant
   |
      LRUHEAPPERCENT
   |
      LRUCOUNT integer-constant 
   } 
[ EVICTACTION { OVERFLOW | DESTROY } ] ]
    

Description

Note: If you configure a table with the DESTROY eviction action, you must ensure that all queries against the table filter results using a primary key value. Queries that do not filter on a primary key may yield partial results if rows are destroyed on eviction. This limitation does not apply to tables that are configured with the OVERFLOW eviction action; query completeness is assured using both in-memory and overflow table data as necessary.
Note: SQLFire does not propagate the DESTROY evict action to configured callback implementations, such as DBSynchronizer. Do not configure eviction with the DESTROY action on a table that has dependent tables (for example, child rows with foreign keys). If a DELETE statement is called for a parent table row that was locally destroyed through eviction, the DELETE succeeds in SQLFire. However, the DELETE operation can later fail in the backend database when DBSynchronizer asynchronously sends the DELETE command, if dependent rows still exist.

If eviction with the DESTROY action is required for dependent tables, consider using a trigger or a synchronous Writer plug-in to respond to DELETE events on the parent table. The trigger or writer should fail the DELETE operation if child rows are found to exist in the backend database.

See also Limitations of Eviction.

All eviction is performed using a least-recently-used (LRU) algorithm, but you can specify whether SQLFire performs eviction based on memory size, heap percentage, or LRU count.

Note: The integer specified in LRUMEMSIZE is the memory size in megabytes.

You must enable eviction for a table when the table is created; you cannot enable eviction at a later time using ALTER TABLE, although you can use ALTER TABLE to change the LRUMEMSIZE value.

Eviction is performed locally by each member in the server groups that host the table. After the configured memory size, heap size, or LRU count is reached, inserts to the table succeed after a corresponding eviction of least-recently-used rows.

If you choose eviction by heap percentage (LRUHEAPPERCENT), individual SQLFire members begin evicting data when their heap usage reaches a configured percentage. You can configure a global heap percentage for all SQLFire data stores, or configure different heap percentages for one or more server groups. Heap Eviction Configuration Procedures describes how to configure the heap percentage.

The eviction action specified in the EVICTACTION clause determines whether the least-recently-used data should be destroyed (DESTROY) or whether it should be overflowed to disk (OVERFLOW). Overflow tables use the disk-store-name (and ASYNCHRONOUS or SYNCHRONOUS settings) for writing to disk if they are specified in the CREATE TABLE statement. If no disk store is specified, overflow tables use the default disk store for evicted rows.

You can optionally persist an overflow table to disk by using the PERSISTENT clause.
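
For example, the following sketch (the table and threshold are illustrative) creates an overflow table that evicts least-recently-used rows to the default disk store once the table's in-memory size reaches 100 megabytes:

CREATE TABLE TESTTABLE
(
  ID INT NOT NULL,
  DESCRIPTION VARCHAR(1024)
)
EVICTION BY LRUMEMSIZE 100 EVICTACTION OVERFLOW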

EXPIRE Clause

Both tables and table entries can be configured to expire using the EXPIRE clause.

Syntax

[ EXPIRE { TABLE | ENTRY } WITH { IDLETIME seconds | TIMETOLIVE seconds} ACTION DESTROY ]*

Description

Note: EXPIRE ENTRY WITH IDLETIME resets an entry's last-accessed time only when the entry is accessed through a primary key-based query. Table scans and index scans do not update the accessed time, so an entry can be destroyed even if it was recently read by such a scan.

For example, if you specify EXPIRE TABLE WITH TIMETOLIVE secs ACTION DESTROY, SQLFire destroys the entire table if there have been no writes to the table for the specified number of seconds. Similarly, for EXPIRE ENTRY WITH IDLETIME secs ACTION DESTROY, SQLFire destroys a table entry if the entry has not been updated for the configured number of seconds.
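
For example, the following sketch (the table and the 300-second value are illustrative) destroys each entry that has not been written for 300 seconds:

CREATE TABLE TESTTABLE
(
  ID INT NOT NULL,
  DESCRIPTION VARCHAR(1024)
)
EXPIRE ENTRY WITH TIMETOLIVE 300 ACTION DESTROY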

PERSISTENT Clause

When you specify the PERSISTENT keyword, SQLFire persists the table to disk. You generally supply the name of an existing disk store ('disk-store-name') to use for persisting the data. If you do not specify a named disk store, SQLFire persists the table in the default disk store.

Syntax

[ PERSISTENT [ 'disk-store-name' ] [ ASYNCHRONOUS | SYNCHRONOUS ] ]

Description

When you use the PERSISTENT keyword, you can also use the ASYNCHRONOUS or SYNCHRONOUS keyword to specify how SQLFire should write table data to the disk store.

CAUTION:
When tables are replicated or partitioned with redundancy, SQLFire persists the data on disk for each copy independently. All disk storage is owned by each member, and nothing is shared at the storage level. The advantage is higher availability, and there are no restrictions on the disk storage layer. But when multiple copies are stored on disk, SQLFire must ensure data consistency for each replica at all times, including the case where members go down in an arbitrary order and come back up again in an arbitrary order. To ensure consistency, SQLFire requires each persistent member to come up before the data becomes available.

See CREATE DISKSTORE.

See Using Disk Stores to Persist Data.

The SYNCHRONOUS and ASYNCHRONOUS attributes specify whether the data has to be written synchronously or asynchronously to the disk store. For ASYNCHRONOUS writes, SQLFire uses additional attributes in the associated disk store to configure the write behavior. See CREATE DISKSTORE.

Example with PERSISTENT Clause

CREATE TABLE FLIGHTS
    (
    FLIGHT_ID CHAR(6) NOT NULL,
    SEGMENT_NUMBER INTEGER NOT NULL,
    ORIG_AIRPORT CHAR(3),
    DEPART_TIME TIME,
    DEST_AIRPORT CHAR(3),
    ARRIVE_TIME TIME,
    MEAL CHAR(1) CONSTRAINT MEAL_CONSTRAINT
    CHECK (MEAL IN ('B', 'L', 'D', 'S')),
    PRIMARY KEY (FLIGHT_ID, SEGMENT_NUMBER)
    )
    PERSISTENT ASYNCHRONOUS