Cache Plug-ins for External Data Connections

SQLFire is commonly used as a distributed SQL cache in an embedded (peer client) or client-server configuration. It provides a plug-in framework to connect the cache to external data sources such as another relational database.

Several cache plug-ins are supported:
  • Reading from Backend on a Miss ("read through")

    When SQLFire is used as a cache, applications can configure a SQL RowLoader that loads data from a backend repository on a miss in SQLFire. When an incoming query request for a uniquely identified row cannot be satisfied by the distributed cache, the loader is invoked to retrieve the data from an external source. SQLFire locks the associated row while the loader runs, so that concurrent readers requesting the same row do not overwhelm the backend database with redundant fetches.

    Note: When SQLFire is used as a "pure" cache (that is, some or all tables are configured with LRU eviction, and only the actively used subset of data is cached in SQLFire), queries on these tables can be based only on the primary key. Only primary key-based queries invoke a configured loader; any other query could potentially return inconsistent results from the cache.

    See Using a RowLoader to Load Existing Data.

  • Synchronous Write Back to Backend ("write through")

    When data is managed strictly in memory, even with replication, a failure of the entire cluster means loss of data. With synchronous write-through, all changes are synchronously written to the backend repository before the cache is updated; the data becomes visible in the cache only if the write-through succeeds. You configure synchronous write-through with a SQLFire "cache writer". See Handling DML Events Synchronously.

  • Asynchronous Write Back to Backend ("write behind")

    If synchronous writes to the backend are too costly, the application can instead configure a "write-behind cache listener". SQLFire supports several options for how events are queued, batched, and written to the database of record. The mechanism is designed for high reliability and handles many failure conditions: persistent queues preserve pending writes even when the backend database is temporarily unavailable. You can preserve the order of event delivery or batch events based on a time interval, and you can conflate continuous updates to the same rows into a single update to reduce the load on the backend database in an update-heavy system. See Handling DML Events Asynchronously.
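As a sketch of the read-through configuration, a RowLoader implementation is registered against a table with the SYS.ATTACH_LOADER system procedure. The schema, table, loader class, and init string below are hypothetical placeholders; see Using a RowLoader to Load Existing Data for the exact procedure signature and the RowLoader interface your class must implement.

```sql
-- Attach a custom RowLoader to the APP.CUSTOMERS table.
-- 'com.example.CustomerLoader' is a hypothetical class implementing
-- SQLFire's RowLoader interface; the init string is passed to it at startup.
CALL SYS.ATTACH_LOADER('APP', 'CUSTOMERS',
  'com.example.CustomerLoader',
  'jdbc:oracle:thin:@backend-host:1521:orcl');

-- After attachment, a primary-key lookup that misses the cache
-- invokes the loader to fetch the row from the backend:
SELECT * FROM APP.CUSTOMERS WHERE CUSTOMER_ID = 42;
```

Note that only the primary key-based query above triggers the loader; a query on a non-key column would be answered solely from whatever subset of rows is currently cached.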
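For write-through, a cache writer is attached in a similar way. The sketch below assumes the SYS.ATTACH_WRITER system procedure and a hypothetical writer class; the exact parameter list may differ, so consult Handling DML Events Synchronously before using it.

```sql
-- Attach a synchronous cache writer to APP.CUSTOMERS.
-- 'com.example.CustomerWriter' is a hypothetical class implementing
-- SQLFire's writer callback; it is invoked before each DML operation,
-- and the cache is updated only if the writer succeeds.
CALL SYS.ATTACH_WRITER('APP', 'CUSTOMERS',
  'com.example.CustomerWriter',
  'jdbc:oracle:thin:@backend-host:1521:orcl',
  NULL);
```

Because the writer runs before the cache change commits, a failure in the backend write causes the DML statement itself to fail, keeping the cache and the database of record consistent.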
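A write-behind configuration can be sketched with the CREATE ASYNCEVENTLISTENER DDL, here using SQLFire's built-in DBSynchronizer listener to replay DML to a JDBC database. The listener name, server group, driver, and connection URL are illustrative assumptions; see Handling DML Events Asynchronously for the supported init parameters and queue options.

```sql
-- Create a write-behind listener backed by a persistent queue.
-- DBSynchronizer is SQLFire's built-in listener for applying DML
-- to an external JDBC database; driver and URL here are examples.
CREATE ASYNCEVENTLISTENER backendSync
(
  LISTENERCLASS 'com.vmware.sqlfire.callbacks.DBSynchronizer'
  INITPARAMS 'org.apache.derby.jdbc.ClientDriver,jdbc:derby://backend-host:1527/ordersDB'
)
SERVER GROUPS (sg1);

-- Route DML events on the table to the listener's queue.
ALTER TABLE APP.CUSTOMERS SET ASYNCEVENTLISTENER (backendSync);
```

With this in place, DML on the table returns as soon as the cache is updated and the event is queued; the queue batches and applies the changes to the backend asynchronously, surviving temporary backend outages.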