SQLFire Cache Strategies

You can deploy SQLFire as a distributed cache for data that is managed in a traditional relational database system. SQLFire offers several strategies for using the product as a database cache.

Considerations for Auto-Generated Keys

If cached tables require automatic generation of identity column values, configure SQLFire to generate the key values and disable key generation in the backend database. If the backend database also generates key values, inserts can fail when SQLFire has already generated the same key.

Note: SQLFire supports automatic generation of identity column values, but it does not support generating a sequence of key values. If a sequence is required, then consider using a stored procedure to obtain the next sequence value from the backend database before inserting the row in SQLFire.
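As a minimal sketch of this split, the identity column is declared in SQLFire but left as a plain column in the backend database. The table and column names here are hypothetical:

```sql
-- In SQLFire: let SQLFire generate the identity value.
CREATE TABLE orders (
  order_id BIGINT GENERATED ALWAYS AS IDENTITY,
  customer VARCHAR(100),
  PRIMARY KEY (order_id)
);

-- In the backend database, declare order_id as a plain BIGINT
-- (no IDENTITY / AUTO_INCREMENT clause), so the key value written
-- back from SQLFire is stored as-is and never regenerated.
```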

Lazy-Loading the Cache

Application query requests are first executed in SQLFire. On a miss, SQLFire invokes a loader that lazily fetches the data from a backend database. SQLFire supports only primary key-based queries on a lazily loaded table: for any other query, it would be impractical (too expensive) to determine which rows are missing from SQLFire and construct a backend query that fetches only those rows. Applications also typically configure least-recently-used (LRU) caching, where the cache makes the best use of available memory by evicting the least recently used rows; applications can issue only primary key-based queries on tables that are configured for eviction. For more information, see Using a RowLoader to Load Existing Data.
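A hedged sketch of registering a loader, assuming the SYS.ATTACH_LOADER system procedure described in the SQLFire reference; the schema, table, and loader class names are hypothetical:

```sql
-- Register a loader for APP.ORDERS. com.example.OrderLoader is an
-- application class implementing the SQLFire RowLoader interface;
-- on a cache miss, SQLFire calls it with the primary key so it can
-- fetch the missing row from the backend database.
CALL SYS.ATTACH_LOADER('APP', 'ORDERS', 'com.example.OrderLoader', NULL);
```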

All updates can be written back to the database synchronously, using a 'writer', or asynchronously, using the built-in DBSynchronizer. For tables that are identical in structure to the backend database tables, applications can "write behind" asynchronously by configuring the database URL of the backend database.
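The write-behind setup can be sketched as follows. This is an assumption-laden example: the listener class name, JDBC driver and URL, server group, and table are placeholders to adapt to your deployment:

```sql
-- Create an async event listener backed by the built-in DBSynchronizer.
-- INITPARAMS carries the backend driver class and JDBC URL (placeholders).
CREATE ASYNCEVENTLISTENER backend_sync (
  LISTENERCLASS 'com.vmware.sqlfire.callbacks.DBSynchronizer'
  INITPARAMS 'org.apache.derby.jdbc.ClientDriver,jdbc:derby://backend-host:1527/backendDB'
) SERVER GROUPS (dbsync_group);

-- Associate the cached table with the listener so inserts, updates,
-- and deletes are queued and applied asynchronously to the backend.
CREATE TABLE app.orders (
  order_id BIGINT PRIMARY KEY,
  customer VARCHAR(100)
) ASYNCEVENTLISTENER (backend_sync);

-- Start the listener on the members of the server group.
CALL SYS.START_ASYNC_EVENT_LISTENER('BACKEND_SYNC');
```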

Preloading Strategies

SQLFire offers several caching strategies that involve preloading data.
  • Preload the database. Load the entire database into SQLFire at bootstrap time. If all updates go through SQLFire, there is no need to resynchronize state from the backend database. The data bootstrapping process is outlined below. With this choice, the tables are loaded with the entire data set from the backend database, and all queries can be executed directly on SQLFire data stores.
  • Preload "hot" data and lazy-load the rest. If the data set in the backend database is very large, a better option is to maintain all of the "hot" tables in SQLFire and lazily load historical data into "historical" tables. With this design, the full supported SQL syntax can be used on the fully populated "hot" tables, but only primary key-based queries can be issued on the historical tables. Typically, these historical tables are configured for LRU eviction.
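A minimal sketch of such a "historical" table, assuming SQLFire's LRU eviction clause; the table name and row-count threshold are illustrative only:

```sql
-- Keep at most ~1,000,000 rows in memory; evict (destroy) the least
-- recently used rows beyond that. Evicted rows are re-fetched via the
-- table's loader on a primary key-based query.
CREATE TABLE app.order_history (
  order_id BIGINT PRIMARY KEY,
  details VARCHAR(500)
) EVICTION BY LRUCOUNT 1000000 EVICTACTION DESTROY;
```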

Integration with Hibernate

SQLFire can also be integrated with object-relational mapping products such as Hibernate to provide a second-level (L2) cache: entire query result sets can be cached in SQLFire. Updates made through the Hibernate API are synchronously propagated to the database, and the corresponding query result sets are invalidated in SQLFire.

Exporting and Transforming an Existing Schema

Application developers can use a tool such as DdlUtils to export an existing relational database schema, transform it to a SQLFire schema, and import the new schema, along with the data, into a SQLFire cluster.