Configuring Gateway Queue Concurrency Levels and Order Policy

You can use multiple concurrent gateway queues to parallelize event processing and distribution between sites.

By default, a gateway uses no concurrency: there is only one queue and one queue processor per gateway. However, applications in multi-site installations often want to process queued events concurrently on the gateway.

You can define concurrency in Write Behind Cache Listener gateways as well as multi-site gateways.

You can set the concurrency-level attribute for a gateway either in the cache.xml configuration or through the Java API; the order-policy examples at the end of this topic show both mechanisms.

A concurrency level of N creates N queues, each with its own queue processor. To obtain maximum throughput, increase the concurrency level until your network is saturated. The following diagram illustrates a gateway WAN with multiple concurrent queues configured.

Considerations for Using Concurrent Gateway Queues

When you use concurrent gateway queues, you should consider the following:
  • Queue attributes are repeated for each queue. In other words, each concurrent queue will point to the same disk store, so the same disk directories are used. If persistence is enabled and overflow occurs, then the threads inserting entries into the queues will compete for the disk. This applies to application threads as well as queue processing threads, so it can affect application performance.
  • The maximum queue memory setting applies to each individual queue. If the concurrency level is set to 10 and the maximum queue memory is set to 100MB, then the total maximum queue memory on the gateway is 1000MB (100 * 10).
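For example (the values below are illustrative), a gateway whose queue is configured with a maximum of 100MB and a concurrency level of 10 can consume up to 1000MB of queue memory in total:

```xml
<!-- 10 queues x 100MB each = up to 1000MB of queue memory for this gateway -->
<gateway id="NY" concurrency-level="10" order-policy="key">
  <gateway-queue maximum-queue-memory="100"
                 enable-persistence="true"
                 disk-store-name="gateway-disk-store"/>
</gateway>
```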

Using Concurrent Gateway Queues with Gateway Listeners

If a gateway listener is configured, only a single instance of the listener is created. Using the default concurrency level (no concurrency), there will be only one thread invoking the listener methods. When you have multiple concurrent gateway queues defined (concurrency-level is greater than 1), then multiple queue processor threads can invoke the listener methods concurrently. Therefore, listener methods need to be thread-safe.

In addition, multiple concurrent gateway queues can create a bottleneck if the listener uses a singleton resource such as a java.sql.Connection, because multiple queue processor threads will compete for it. In this case, use a connection pool (such as a javax.sql.DataSource) instead.
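The pooling pattern can be sketched as follows. This is a simplified illustration, not the product's listener API: PooledConnection stands in for a real pooled resource such as java.sql.Connection, and processEvents is modeled loosely on a batch-processing listener callback. The point is that each invocation borrows a connection from a shared pool, so concurrent queue processor threads never contend on a single connection object.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical stand-in for a pooled resource such as java.sql.Connection.
class PooledConnection {
    void write(Object event) { /* write the event to the backing store */ }
}

// Sketch of a listener whose processEvents method is safe to call from
// multiple queue processor threads: each call borrows a connection from a
// shared pool instead of sharing one singleton connection.
class PooledGatewayListener {
    private final BlockingQueue<PooledConnection> pool;

    PooledGatewayListener(int poolSize) {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) pool.add(new PooledConnection());
    }

    boolean processEvents(List<?> events) {
        PooledConnection conn;
        try {
            conn = pool.take();                 // blocks until a connection is free
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return false;                       // signal that the batch was not processed
        }
        try {
            for (Object e : events) conn.write(e);
            return true;                        // batch processed successfully
        } finally {
            pool.offer(conn);                   // always return the connection to the pool
        }
    }
}
```

Real connection pools add validation, timeouts, and eviction, but the borrow-use-return shape is the same.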

The following diagram illustrates concurrent gateway queues with a listener:

Configuring Gateway Event Ordering Policy

After you have configured the concurrency-level in a gateway, the order-policy gateway attribute defines the ordering of events distributed by the gateway across its multiple queues. The valid order-policy values are key, thread, and partition:
  • key — updates to the same key are added to the same gateway queue and are sent in order.
  • thread — order is preserved per initiating thread; updates made by the same thread are added to the same gateway queue and are sent in order.
  • partition — can be used when an application has implemented custom partitioning with a PartitionResolver. All gateway events that share the same "partitioning key" (RoutingObject) are guaranteed to be dispatched in order.
The default order-policy for gateway events is key.
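One plausible way to picture key ordering is a hash-based router that always maps the same key to the same queue index; this is an illustration of the guarantee, not the product's actual internal routing code:

```java
// Illustration only: events with the same key always hash to the same queue
// index, so per-key order is preserved while different keys spread across
// the concurrent queues.
class KeyOrderRouter {
    private final int concurrencyLevel;

    KeyOrderRouter(int concurrencyLevel) {
        this.concurrencyLevel = concurrencyLevel;
    }

    // With order-policy="key", the event key chooses the queue; a thread
    // policy could use the originating thread's id the same way.
    int queueForKey(Object key) {
        return Math.abs(key.hashCode() % concurrencyLevel);
    }
}
```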

An example of an application that benefits from the key order policy is a single feeder sending stock updates to other systems. This application consists of one thread and uses many keys. To achieve maximum concurrency, it needs key-based event ordering; otherwise, all updates end up in the same queue regardless of how the concurrency level is configured. In general, if updates to different entries have no relationship to each other, key-based event ordering is recommended. If an update to one entry can affect an update to another entry, thread-based ordering is recommended instead.

When using thread-based event ordering, the application should use more threads than the concurrency-level setting; otherwise, some queues go unused.

To define the order-policy attribute on the gateway node, use one of the following mechanisms:
  • cache.xml configuration:
    <gateway-hub id="LN" port="22221">
      <gateway id="NY" concurrency-level="5" order-policy="key">
        <gateway-endpoint id="NY-1" host="localhost" port="11111"/>
        <gateway-endpoint id="NY-2" host="localhost" port="11112"/>
        <gateway-queue batch-size="1000" enable-persistence="true" disk-store-name="gateway-disk-store" maximum-queue-memory="200"/>
      </gateway>
      <gateway id="TK" concurrency-level="10" order-policy="thread">
    <gateway-endpoint id="TK-1" host="localhost" port="33331"/>
        <gateway-endpoint id="TK-2" host="localhost" port="33332"/>
        <gateway-queue batch-size="1000" enable-persistence="true" disk-store-name="gateway-disk-store" maximum-queue-memory="100"/>
      </gateway>
    </gateway-hub>
  • Java API configuration. Set the order policy using the Gateway.setOrderPolicy method:
    gateway.setOrderPolicy(Gateway.OrderPolicy.THREAD);  // or OrderPolicy.KEY (the default) or OrderPolicy.PARTITION