Compacting Disk Store Log Files

You can configure automatic compaction for an operation log based on the percentage of garbage content. You can also request compaction manually for online and offline disk stores.

How Compaction Works

When a DML operation is added to a disk store, any preexisting operation for the same record becomes obsolete, and SQLFire marks it as garbage. For example, when you update a record, the update operation is added to the store. If you later delete the record, the delete operation is added and the update operation becomes garbage. SQLFire does not remove garbage records as it goes, but it tracks the percentage of garbage in each operation log and provides mechanisms for removing garbage to compact your log files.
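
To make that example concrete, here is a minimal SQL sketch showing how successive DML operations turn earlier oplog records into garbage. The table and disk store names are hypothetical:

CREATE DISKSTORE store1;
CREATE TABLE orders (id INT PRIMARY KEY, status VARCHAR(20)) PERSISTENT 'store1';

-- the INSERT writes an operation record to the oplog
INSERT INTO orders VALUES (1, 'NEW');
-- the UPDATE supersedes the INSERT; the INSERT record becomes garbage
UPDATE orders SET status = 'SHIPPED' WHERE id = 1;
-- the DELETE supersedes the UPDATE; the UPDATE record becomes garbage
DELETE FROM orders WHERE id = 1;

After this sequence only the DELETE record is live; the first two records count toward the oplog's garbage percentage.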

SQLFire compacts an old operation log by copying all non-garbage records into the current log and discarding the old files. As with logging, oplogs are rolled as needed during compaction to stay within the MAXLOGSIZE setting.

You can configure the system to automatically compact any closed operation log when its garbage content reaches a certain percentage. You can also manually request compaction for online and offline disk stores. For the online disk store, the current operation log is not available for compaction, no matter how much garbage it contains.
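
You set these compaction attributes when you create the disk store. The following is a minimal sketch using only the attributes discussed on this page, with illustrative values:

CREATE DISKSTORE store1
  MAXLOGSIZE 128
  AUTOCOMPACT true
  COMPACTIONTHRESHOLD 50
  ALLOWFORCECOMPACTION false
  ('dir1' 10240);

Here oplogs roll when they reach 128 MB, closed oplogs are compacted automatically once they are at least 50% garbage, and manual online compaction is not enabled.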

[Figure: Online Compaction Diagram]

Offline compaction runs in essentially the same way, but without incoming DML operations. Also, because there is no current open log, the compaction creates a new oplog to start with.

Run Online Compaction

Old log files become eligible for online compaction when their garbage content surpasses a configured percentage of the total file. A record is garbage when its operation is superseded by a more recent operation for the same record. During compaction, the non-garbage records are added to the current log along with new DML operations. Online compaction does not block current system operations.
  • Run automatic compaction. When AUTOCOMPACT is true, SQLFire automatically compacts each oplog when its garbage content surpasses the COMPACTIONTHRESHOLD. Automatic compaction takes cycles from your other operations, so you may want to disable it and use manual compaction instead, to control the timing.
  • Run manual compaction. To run manual compaction:
    • Set the disk store attribute ALLOWFORCECOMPACTION to true. This causes SQLFire to maintain extra data about the files so that it can compact on demand. This is disabled by default to save space. You can run manual online compaction at any time while the system is running. Oplogs eligible for compaction based on the COMPACTIONTHRESHOLD are compacted into the current oplog.
    • Run manual compaction as needed. You can compact all online disk stores in a distributed system from the command-line. For example:
      sqlf compact-all-disk-stores
      Note: This sqlf command requires a local sqlfire.properties file that contains properties to locate the distributed system. Alternatively, specify the multicast port or locator properties on the command line to connect to the cluster (for example, -mcast-port=port_number), as in the sketch after this list.
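
For example, hypothetical invocations that pass the connection properties on the command line (the port numbers and hostname are illustrative):

sqlf compact-all-disk-stores -mcast-port=10334
sqlf compact-all-disk-stores -locators=localhost[10101]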

Run Offline Compaction

Offline compaction is a manual process. All log files are compacted as much as possible, regardless of how much garbage they hold. Offline compaction creates new log files for the compacted log records.

Use this syntax to compact individual offline disk stores:
sqlf compact-disk-store myDiskStoreName /firstDir /secondDir -maxOplogSize=maxMegabytesForOplog

You must provide all of the directories in the disk store. If no oplog max size is specified, SQLFire uses the system default.
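
For example, a hypothetical run against a store whose files are spread across two directories, capping compacted oplogs at 512 MB:

sqlf compact-disk-store myDiskStoreName /firstDir /secondDir -maxOplogSize=512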

Offline compaction can take a lot of memory. If you get a java.lang.OutOfMemory error while running it, you may need to increase your heap size. See the sqlf command help for instructions on how to do this.

Performance Benefits of Manual Compaction

You can improve performance during busy times by disabling automatic compaction and running manual compaction during periods of lighter system load or during downtimes. For example, you could trigger a manual compaction after your application performs a large set of data operations, or run sqlf compact-all-disk-stores every night when system use is very low.

To follow a strategy like this, you need to set aside enough disk space to accommodate all of the non-compacted disk data, and you might need to increase system monitoring to make sure you do not overrun your disk space. If offline compaction alone meets your needs, you can set ALLOWFORCECOMPACTION to false and avoid storing the extra information required for manual online compaction.
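
One way to run the nightly compaction is a cron job. A minimal sketch, assuming sqlf is on the PATH of the account running the job and that a sqlfire.properties file in /var/sqlfire (a hypothetical directory) locates the distributed system:

# crontab entry: compact all online disk stores at 2:00 AM daily
0 2 * * * cd /var/sqlfire && sqlf compact-all-disk-stores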

Directory Size Limits

If you reach the disk directory size limits during compaction:
  • For automatic compaction, the system logs a warning, but does not stop.
  • For manual compaction, the operation stops and returns a DiskAccessException to the calling process, reporting that the system has run out of disk space.
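
The per-directory size limits themselves are set when you create the disk store. A minimal sketch with illustrative names and sizes, following the CREATE DISKSTORE directory syntax:

CREATE DISKSTORE store2
  ('dir1' 10240, 'dir2' 10240);

Each integer caps the corresponding directory at that many megabytes of disk store data.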

Example Compaction Run

In this listing from an example offline compaction run, the disk store compaction had nothing to do in the *_3.* files, so they were left alone. The *_4.* files had garbage records, so their oplog records were compacted into the new *_5.* files.
bash-2.05$ ls -ltra backupDirectory
total 28
-rw-rw-r--   1 jpearson users          3 Apr  7 14:56 BACKUPds1_3.drf
-rw-rw-r--   1 jpearson users         25 Apr  7 14:56 BACKUPds1_3.crf
drwxrwxr-x   3 jpearson users       1024 Apr  7 15:02 ..
-rw-rw-r--   1 jpearson users       7085 Apr  7 15:06 BACKUPds1.if
-rw-rw-r--   1 jpearson users         18 Apr  7 15:07 BACKUPds1_4.drf
-rw-rw-r--   1 jpearson users       1070 Apr  7 15:07 BACKUPds1_4.crf
drwxrwxr-x   2 jpearson users        512 Apr  7 15:07 .

bash-2.05$ sqlf validate-disk-store ds1 backupDirectory
/root: entryCount=6
/partitioned_region entryCount=1 bucketCount=10
Disk store contains 12 compactable records.
Total number of region entries in this disk store is: 7

bash-2.05$ sqlf compact-disk-store ds1 backupDirectory
Offline compaction removed 12 records.
Total number of region entries in this disk store is: 7

bash-2.05$ ls -ltra backupDirectory
total 16
-rw-rw-r--   1 jpearson users          3 Apr  7 14:56 BACKUPds1_3.drf
-rw-rw-r--   1 jpearson users         25 Apr  7 14:56 BACKUPds1_3.crf
drwxrwxr-x   3 jpearson users       1024 Apr  7 15:02 ..
-rw-rw-r--   1 jpearson users          0 Apr  7 15:08 BACKUPds1_5.drf
-rw-rw-r--   1 jpearson users        638 Apr  7 15:08 BACKUPds1_5.crf
-rw-rw-r--   1 jpearson users       2788 Apr  7 15:08 BACKUPds1.if
drwxrwxr-x   2 jpearson users        512 Apr  7 15:09 .
bash-2.05$