Configuring Dispatcher Threads and Order Policy for Event Distribution

You can use multiple dispatcher threads to process region events simultaneously in a serial gateway sender queue for distribution between sites, or in a serial asynchronous event queue for distributing events for write-behind caching. You can also configure the ordering policy for dispatching those events.

Note: Dispatcher threads and ordering policy cannot be configured for parallel gateway sender queues or parallel asynchronous event queues.

By default, a serial gateway sender queue or asynchronous event queue uses only one dispatcher thread per queue. However, in some cases an application can process queued events concurrently for distribution to another GemFire site or listener. In these cases, you can configure multiple dispatcher threads for a single serial queue.

When you configure multiple dispatcher threads for a serial queue, GemFire creates an additional copy of the queue for each thread on each member that hosts the queue. To obtain the maximum throughput, increase the number of dispatcher threads until your network is saturated.

The following diagram illustrates a serial gateway sender queue that is configured with multiple dispatcher threads.

[Figure: Queue with Multiple Dispatcher Threads]

Performance and Memory Considerations

When you configure a serial gateway sender or an asynchronous event queue with multiple dispatcher threads, consider the following:
  • Queue attributes are repeated for each queue that is created for a dispatcher thread. That is, each concurrent queue points to the same disk store, so the same disk directories are used. If persistence is enabled and overflow occurs, the threads that insert entries into the queues compete for the disk. This applies to application threads and dispatcher threads, so it can affect application performance.
  • The maximum-queue-memory setting applies to each individual queue. If you configure 10 dispatcher threads and the maximum queue memory is set to 100 MB, the total maximum queue memory for the queue is 1000 MB on each member that hosts the queue. The sketch after this list illustrates the calculation.
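
For example, the following is a minimal sketch of the memory calculation using the GatewaySenderFactory API shown later in this topic. The thread count and memory values are illustrative only, and cache is assumed to be an existing Cache instance:

     // Each dispatcher thread gets its own copy of the queue on this member.
     GatewaySenderFactory factory = cache.createGatewaySenderFactory();
     factory.setDispatcherThreads(10);    // 10 queue copies
     factory.setMaximumQueueMemory(100);  // 100 MB allowed per queue copy
     // Worst-case queue memory on this member before overflow to disk:
     // 10 threads x 100 MB = 1000 MB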

Values for Queue Ordering Policy

If you configure multiple dispatcher threads for a queue, you can also configure the ordering policy (order-policy) that those threads use to distribute events from the queue. The valid order-policy values are:
  • key (default). All updates to the same key are distributed in order. GemFire preserves key ordering by placing all updates to the same key in the same dispatcher thread queue. You typically use key ordering when updates to entries have no relationship to each other, such as for an application that uses a single feeder to distribute stock updates to several other systems.
  • thread. All region updates from a given thread are distributed in order. GemFire preserves thread ordering by placing all region updates from the same thread into the same dispatcher thread queue. In general, use thread ordering when updates to one region entry affect updates to another region entry.
  • partition. All region events that share the same partitioning key are distributed in order. Specify partition ordering when applications use a PartitionResolver to implement custom partitioning. With partition ordering, all entries that share the same "partitioning key" (RoutingObject) are placed into the same dispatcher thread queue, as in the sketch after this list.
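
With partition ordering, the routing object returned by your PartitionResolver is what the dispatcher threads group on. The following is a minimal sketch of such a resolver, assuming a hypothetical String key of the form "customerId|tradeId"; the class name and key format are illustrative only.

     import com.gemstone.gemfire.cache.EntryOperation;
     import com.gemstone.gemfire.cache.PartitionResolver;

     // Hypothetical resolver: all events for the same customer must be
     // distributed in order. Returning the customer portion of the key as the
     // routing object makes it the "partitioning key", so partition ordering
     // places all of that customer's events in the same dispatcher thread queue.
     public class CustomerPartitionResolver implements PartitionResolver<String, Object> {

       public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
         String key = opDetails.getKey();
         return key.substring(0, key.indexOf('|')); // customerId portion of the key
       }

       public String getName() {
         return "CustomerPartitionResolver";
       }

       public void close() {
         // no resources to release
       }
     }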

Examples: Configuring Dispatcher Threads and Ordering Policy for a Serial Gateway Sender Queue

To set the number of dispatcher threads and ordering policy for a serial gateway sender, use one of the following mechanisms.
  • cache.xml configuration
    <cache>
      <gateway-sender id="NY" parallel="false" 
       remote-distributed-system-id="1"
       enable-persistence="true"
       disk-store-name="gateway-disk-store"
       maximum-queue-memory="200"
       dispatcher-threads="5" order-policy="key"/> 
       ... 
    </cache>
  • Java API configuration
     Cache cache = new CacheFactory().create();
     
     GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
     gateway.setParallel(false);                     // serial gateway sender queue
     gateway.setPersistenceEnabled(true);
     gateway.setDiskStoreName("gateway-disk-store");
     gateway.setMaximumQueueMemory(200);             // MB per queue copy
     gateway.setDispatcherThreads(5);                // five copies of the queue
     gateway.setOrderPolicy(OrderPolicy.KEY);
     GatewaySender sender = gateway.create("NY", 1); // remote distributed system id is an int
     sender.start();
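
A gateway sender distributes events only for regions that reference its id. As a minimal sketch in the same snippet style as above (the region name and shortcut are illustrative), the "NY" sender created in the example could be attached to a region through the API:

     // Attach the "NY" sender to a region; only this region's events are queued.
     Region<String, String> customers = cache
         .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
         .addGatewaySenderId("NY")
         .create("customers");

In cache.xml, the equivalent is the gateway-sender-ids attribute on region-attributes.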
    
The following examples show how to set dispatcher threads and ordering policy for an asynchronous event queue:
  • cache.xml configuration
    <cache>
       <async-event-queue id="sampleQueue" persistent="true"
        disk-store-name="async-disk-store" parallel="false"
         dispatcher-threads="5" order-policy="key">
          <async-event-listener>
             <class-name>MyAsyncEventListener</class-name>
             <parameter name="url"> 
               <string>jdbc:db2:SAMPLE</string> 
             </parameter> 
             <parameter name="username"> 
               <string>gfeadmin</string> 
             </parameter> 
             <parameter name="password"> 
               <string>admin1</string> 
             </parameter> 
          </async-event-listener>
        </async-event-queue>
    ...
    </cache>
  • Java API configuration
     Cache cache = new CacheFactory().create();
     AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
     factory.setPersistent(true);
     factory.setDiskStoreName("async-disk-store");
     factory.setParallel(false);              // serial asynchronous event queue
     factory.setDispatcherThreads(5);
     factory.setOrderPolicy(OrderPolicy.KEY);
     AsyncEventListener listener = new MyAsyncEventListener();
     AsyncEventQueue sampleQueue = factory.create("sampleQueue", listener); // id matches the cache.xml example above
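
The examples above reference a MyAsyncEventListener class but do not show it. The following is a minimal sketch of such a listener, assuming the GemFire AsyncEventListener and Declarable interfaces (Declarable.init receives the <parameter> values from cache.xml); the write-behind logic is only indicated by comments.

     import java.util.List;
     import java.util.Properties;

     import com.gemstone.gemfire.cache.Declarable;
     import com.gemstone.gemfire.cache.asyncqueue.AsyncEvent;
     import com.gemstone.gemfire.cache.asyncqueue.AsyncEventListener;

     public class MyAsyncEventListener implements AsyncEventListener, Declarable {

       private Properties databaseProperties;

       // Receives the url/username/password <parameter> values from cache.xml.
       public void init(Properties props) {
         this.databaseProperties = props;
       }

       // Called with a batch of queued events; return true if the batch was
       // processed successfully so it can be removed from the queue.
       public boolean processEvents(List<AsyncEvent> events) {
         for (AsyncEvent event : events) {
           Object key = event.getKey();
           Object value = event.getDeserializedValue();
           // write-behind: persist key/value to the database here
         }
         return true;
       }

       public void close() {
         // release database connections here
       }
     }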
