This site hosts historical documentation. Visit www.terracotta.org for recent product information.
WAN replication allows data to remain in sync across clusters connected by a WAN link. For example, geographically remote data centers can use WAN replication to maintain consistent views of data.
Terracotta Distributed Ehcache WAN Replication integrates with your application simply through Ehcache configuration. While your application continues to use Terracotta Distributed Ehcache as before, caches marked for WAN replication are automatically synchronized across the WAN link. Other advantages include configuration-based components designed to get WAN integration up and running quickly.
The following cache operations are replicated: puts, updates, and removals.
The following components are required:
Ehcache WAN-replication JAR
Obtain this JAR from your Terracotta representative.
Terracotta Distributed Ehcache allows per-cache WAN replication using clustered caches and a message broker that supports JMS (Java Message Service). Ehcache-based applications can use Terracotta Distributed Ehcache WAN Replication to synchronize caches if:
Only consistent caches can participate in Terracotta Distributed Ehcache WAN Replication.
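Consistency is set on the cache's <terracotta> element. As a sketch, and assuming an Ehcache version that supports the consistency attribute on that element, a cache can be marked as strongly consistent like this:

```xml
<cache name="Foo" eternal="false" timeToIdleSeconds="3600"
       timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
  <!-- consistency="strong" marks this clustered cache as consistent.
       The attribute is an assumption here; availability depends on the
       Ehcache version in use. -->
  <terracotta consistency="strong"/>
</cache>
```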
To set up Terracotta Distributed Ehcache WAN Replication, follow these steps:
Ensure that the Terracotta clusters that will use WAN replication can run as expected without WAN replication.
See the installation instructions for Terracotta Distributed Enterprise Ehcache for more information on installing a cluster.
Install and configure a supported message broker for each Terracotta cluster.
The broker can be run on any of the nodes in the cluster, or on its own host.
Install the WAN-replication JAR in the following path:
UNIX/LINUX
${TERRACOTTA_HOME}/ehcache/ehcache-wanreplication-<version>.jar
MICROSOFT WINDOWS
%TERRACOTTA_HOME%\ehcache\ehcache-wanreplication-<version>.jar
Configure CacheManagers that will participate in WAN replication.
CacheManagers managing at least one cache that will be replicated across the WAN must be configured. See Configuring the CacheManager.
Configure caches that will be replicated across the WAN.
See Configuring the Caches.
Ensure that a reliable WAN link exists between all target clusters.
On each cluster, start the message broker and the Terracotta server before starting the clients.
Your clusters should now be able to replicate caches across the WAN link.
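The startup order described above can be sketched as follows for a UNIX installation. The paths and script names are illustrative and depend on your ActiveMQ and Terracotta installations:

```shell
# 1. Start the message broker first (ActiveMQ assumed here).
nohup ${ACTIVEMQ_HOME}/bin/activemq > /tmp/amqlog 2>&1 &

# 2. Start the Terracotta server for the local cluster.
${TERRACOTTA_HOME}/bin/start-tc-server.sh &

# 3. Start the client applications only after the broker
#    and the Terracotta server are up.
```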
An application integrated with Ehcache has at least one CacheManager configured. If any of the caches managed by a CacheManager should be replicated over the WAN, that CacheManager must be able to participate in WAN replication.
You configure a CacheManager to participate in WAN replication by adding the element <cacheManagerPeerProviderFactory> along with a set of properties that specify connection and message-delivery details. The following is an example of a <cacheManagerPeerProviderFactory> block that uses ActiveMQ as the message broker and configures a queue-based architecture:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.wan.jms.JMSCacheManagerPeerProviderFactory"
properties="initialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory,
destinationType=queue,
noOfReadDestinations=1,
noOfWriteDestinations=1,
writeProviderURL=nio://localhost:61616,
readProviderURL=nio://10.0.4.201:61616,
writeDestinationBindingName=dynamicQueues/queue2,
readDestinationBindingName=dynamicQueues/queue2,
writeDestinationConnectionFactoryBindingName=ConnectionFactory,
readDestinationConnectionFactoryBindingName=ConnectionFactory,
acknowledgementMode=CLIENT_ACKNOWLEDGE,
persistent=true,
pooledConnectionFactoryProvider=net.sf.ehcache.distribution.wan.jms.AmqPooledConnectionFactoryProvider,
poolMaxConnections=10,
poolIdleTimeout=5000,
consumerThreads=1,
resourceCaching=CONSUMER,
conflictResolver=net.sf.ehcache.distribution.wan.jms.TimeBasedConflictResolver"
propertySeparator=","/>
For topic-based architecture, certain properties must be set as shown:
...
destinationType=topic,
writeDestinationBindingName=dynamicTopics/topic1,
readDestinationBindingName=dynamicTopics/topic2,
...
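Assembled with the remaining properties from the queue-based example, a complete topic-based block would look like the following. The host addresses and binding names are illustrative, as in the queue example:

```xml
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.wan.jms.JMSCacheManagerPeerProviderFactory"
properties="initialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory,
destinationType=topic,
noOfReadDestinations=1,
noOfWriteDestinations=1,
writeProviderURL=nio://localhost:61616,
readProviderURL=nio://10.0.4.201:61616,
writeDestinationBindingName=dynamicTopics/topic1,
readDestinationBindingName=dynamicTopics/topic2,
writeDestinationConnectionFactoryBindingName=ConnectionFactory,
readDestinationConnectionFactoryBindingName=ConnectionFactory,
acknowledgementMode=CLIENT_ACKNOWLEDGE,
persistent=true,
pooledConnectionFactoryProvider=net.sf.ehcache.distribution.wan.jms.AmqPooledConnectionFactoryProvider,
poolMaxConnections=10,
poolIdleTimeout=5000,
consumerThreads=1,
resourceCaching=CONSUMER,
conflictResolver=net.sf.ehcache.distribution.wan.jms.TimeBasedConflictResolver"
propertySeparator=","/>
```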
The <cacheManagerPeerProviderFactory> element has two attributes: class, which specifies the factory class, and properties, which holds the configuration properties defined in the following table:
Property | Definition |
---|---|
initialContextFactoryName | A class, typically provided by the application, that initializes the context to a given message broker. The value in this case is a factory class that extends org.apache.activemq.jndi.ActiveMQInitialContextFactory. |
destinationType | Determines whether a queue or a topic architecture is used. |
noOfReadDestinations | The number of content sources that will be read from. These sources are other clusters with similar support for JMS. |
noOfWriteDestinations | The number of content targets that will be written to. These are the queues or topics that the local cluster writes to. |
writeProviderURL | URL where the write queueing service is available (used in loading the initial context). The value localhost reflects the fact that the writes are on the same host. |
readProviderURL | URL for the source where the read queueing service is available (used in loading the initial context). |
writeDestinationBindingName | The JNDI binding name for the write queue or topic. |
readDestinationBindingName | The JNDI binding name for the read queue or topic. |
writeDestinationConnectionFactoryBindingName | The JNDI binding names for the write QueueConnectionFactory or the TopicConnectionFactory. |
readDestinationConnectionFactoryBindingName | The JNDI binding names for the read QueueConnectionFactory or the TopicConnectionFactory. |
acknowledgementMode | Sets how the messaging system is informed that a message has been received. CLIENT_ACKNOWLEDGE is the value used where guaranteed message delivery is required. |
persistent | Boolean value that sets whether messages should be persisted during interruptions or disruptions to service. Use true where guaranteed message delivery is required. |
pooledConnectionFactoryProvider | The class that provides the connection pool. |
poolMaxConnections | The maximum number of connections the connection pool can make available. Improves performance at the cost of memory and other overhead. |
poolIdleTimeout | The maximum number of milliseconds the application waits for a connection before timing out. |
consumerThreads | The number of consumer threads in the messaging service. |
resourceCaching | The location of cached resources. Resources are cached to improve performance. The default value is CONSUMER. |
conflictResolver | The class defining the method of resolution for conflicts between existing copies of the same data (a put cache Element). Time and version-based methods relying on Ehcache Element metadata are provided. A custom resolver can be used. |
propertySeparator | The designated delimiter for properties in the properties attribute. |
Note that for properties whose values are classnames, the fully qualified name of the class is required.
A cache is configured for clustering with Terracotta by adding a <terracotta> element to that cache's <cache> block in the Terracotta Distributed Ehcache configuration file. For example, the following cache is configured for clustering:
<cache name="Foo"
eternal="false" timeToIdleSeconds="3600" timeToLiveSeconds="0"
memoryStoreEvictionPolicy="LFU">
<!-- Adding the element <terracotta /> turns on Terracotta clustering for the cache Foo. -->
<terracotta />
</cache>
Clustered caches can be configured for WAN replication by adding a <cacheEventListenerFactory> subelement to the cache's <cache> block:
<cache name="Foo" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
<terracotta />
<!-- If <terracotta /> exists, adding <cacheEventListenerFactory> turns on
WAN replication for Foo. -->
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.wan.jms.JMSCacheReplicatorFactory"
properties="replicateAsynchronously=true,
replicatePuts=true,
replicateUpdates=true,
replicateUpdatesViaCopy=true,
replicateRemovals=true,
conflationEnabled=true,
asynchronousReplicationIntervalMillis=30000"
propertySeparator=","/>
</cache>
The <cacheEventListenerFactory> element has two attributes: class, which specifies the factory class, and properties, which holds the configuration properties defined in the following table:
Property | Definition |
---|---|
replicateAsynchronously | Asynchronous replication ("true") delivers events to a message buffer, from which delivery to the message broker takes place. Synchronous replication ("false") causes application threads to deliver events to the message broker directly. Asynchronous replication is the recommended mode because it yields better performance, as application threads do not block while delivering messages to the message broker. |
replicatePuts | Set to "true" if puts to the cache should be replicated instead of ignored ("false"). |
replicateUpdates | Set to "true" if an overwritten (replaced) cache element should invalidate remote elements having the same key (resulting in updates on those caches), or "false" if remote caches should ignore the update. |
replicateUpdatesViaCopy | Set to "true" if updates to the cache should be copied to remote caches instead of removed from those caches ("false"). |
replicateRemovals | Set to "true" if explicit element removals (not expirations) should be replicated instead of ignored ("false"). |
asynchronousReplicationIntervalMillis | The interval, in milliseconds, at which the replicator polls to check whether a message transmission is necessary. In effect only if replicateAsynchronously is set to "true". |
propertySeparator | The character used to delimit the properties. |
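Putting the two configuration pieces together, a minimal ehcache.xml for one WAN-replicated cache might look like the following sketch. The terracottaConfig URL and broker addresses are placeholders, and the peer-provider properties are abbreviated here; in practice, supply the full property set shown in the queue-based example above:

```xml
<ehcache>
  <!-- Points clients at the local Terracotta server array;
       host and port are placeholders. -->
  <terracottaConfig url="localhost:9510"/>

  <!-- CacheManager-level WAN peer provider (property list abbreviated). -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.wan.jms.JMSCacheManagerPeerProviderFactory"
      properties="initialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory,
                  destinationType=queue,
                  writeProviderURL=nio://localhost:61616,
                  readProviderURL=nio://10.0.4.201:61616"
      propertySeparator=","/>

  <cache name="Foo" eternal="false" timeToIdleSeconds="3600"
         timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
    <!-- <terracotta/> clusters the cache; the listener factory
         enables WAN replication for it. -->
    <terracotta/>
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.wan.jms.JMSCacheReplicatorFactory"
        properties="replicateAsynchronously=true,
                    replicatePuts=true,
                    replicateUpdates=true,
                    replicateUpdatesViaCopy=true,
                    replicateRemovals=true,
                    conflationEnabled=true,
                    asynchronousReplicationIntervalMillis=30000"
        propertySeparator=","/>
  </cache>
</ehcache>
```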
The following message brokers are supported:
Contact your Terracotta representative for more information on using WebSphereMQ. Other message brokers should be tested thoroughly before production use.
Download and install one instance of Apache ActiveMQ. Only one instance of ActiveMQ is required per cluster. However, you must make the ActiveMQ classes available to every Terracotta client on each Terracotta cluster that participates in WAN replication. To do this, copy the following JAR files from the ActiveMQ distribution to a location on the classpath of each client application (or to WEB-INF/lib if using a WAR file):
To provide ActiveMQ with enough memory, increase its heap settings from the default (-Xmx256M -Xms256M) to -Xmx1024M -Xms1024M.
To learn how to configure Ehcache for ActiveMQ, see the example in the cache configuration section.
To run ActiveMQ, issue the following command from the ActiveMQ home directory:
UNIX/LINUX
[PROMPT] nohup bin/activemq > /tmp/smlog 2>&1 &
MICROSOFT WINDOWS
[PROMPT] bin\activemq
In Microsoft Windows, you can also run ActiveMQ as a Windows service (using a Java service wrapper). See Apache ActiveMQ for more information.
Terracotta Distributed Ehcache WAN Replication is a flexible solution supporting a number of complex topologies, including:
To learn more about using Terracotta Distributed Ehcache WAN Replication in your architecture, contact your Terracotta representative.
For a discussion on simple WAN replication setups, see Strategies For Setting Up WAN Replication.