This site hosts historical documentation. Visit www.terracotta.org for recent product information.

BigMemory Go FAQ

The BigMemory Go Technical FAQ answers frequently asked questions about using BigMemory Go, integrating it with other products, and troubleshooting. Other resources for resolving issues include:

  • Release Notes – Lists features and issues for specific versions of BigMemory Go and other Terracotta products.
  • Compatibility Information – Includes tables on compatible versions of BigMemory Go and other Terracotta products, JVMs, and application servers.
  • Terracotta Forums – If your question doesn't appear below, consider posting it on the Ehcache Forum.

Getting Started

What if I need enterprise support for BigMemory Go?

Terracotta provides enterprise support for BigMemory Go as part of software subscription. To get enterprise support, contact Terracotta.

What's the difference between BigMemory Go and BigMemory Max?

BigMemory Go is for in-memory data management on a single JVM (in-process). BigMemory Max is for distributed in-memory management across an array of servers. For more on Go vs. Max, see BigMemory Overview.

Configuration

Where is the source code?

BigMemory Go is not an open-source product. See the Ehcache website for an open-source caching project.

Can you use more than one instance of BigMemory Go in a single JVM?

Yes. Create a CacheManager using new CacheManager(...) and keep a reference to it. The singleton approach, accessible with the getInstance(...) method, is still available too. Note that one CacheManager can support hundreds of caches, so separate CacheManagers are needed only where different configurations are required. The Hibernate Provider has also been updated to support this behavior.
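
For example, a minimal sketch of using two independent CacheManager instances (the configuration file names are hypothetical):

// Two CacheManagers, each with its own configuration file:
CacheManager managerA = new CacheManager("ehcache-a.xml");
CacheManager managerB = new CacheManager("ehcache-b.xml");

// The singleton approach is still available:
CacheManager singleton = CacheManager.getInstance();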

What elements are mandatory in ehcache.xml?

See the file ehcache.xsd in the BigMemory Go kit for the latest information on required configuration elements.

How is auto-versioning of elements handled?

Automatic element versioning works only with memory-store caches. BigMemory Go does not use auto-versioning.

To enable auto-versioning, set the system property net.sf.ehcache.element.version.auto to true (it is false by default). Manual (user-provided) versioning of cache elements is ignored when auto-versioning is in effect. Note that if this property is turned on for an ineligible cache, auto-versioning will silently fail.

How do I get a memory-only store to persist to disk between JVM restarts?

BigMemory Go offers fast, robust disk persistence set through configuration.

There are two patterns available: write-through and write-behind caching. In write-through caching, writes to the cache cause writes to an underlying resource. The cache acts as a facade to the underlying resource. With this pattern, it often makes sense to read through the cache too. Write-behind caching uses the same client API; however, the write happens asynchronously.

While a file system or a web-service client can underlie the facade of a write-through cache, the most common underlying resource is a database.
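
As a sketch of enabling the disk persistence mentioned above programmatically, assuming the Fast Restart strategy (Strategy.LOCALRESTARTABLE) from the net.sf.ehcache.config.PersistenceConfiguration API; the cache name and size are illustrative:

// 'manager' is an existing CacheManager
CacheConfiguration config = new CacheConfiguration("restartableCache", 10000)
    .persistence(new PersistenceConfiguration().strategy(Strategy.LOCALRESTARTABLE));
manager.addCache(new Cache(config));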

Can I use BigMemory Go as a memory store only?

Yes. Just set the persistence strategy (in the <cache> configuration element) to "none":

<cache>
  ...
  <persistence strategy="none"/>
  ...
</cache>

Can I use BigMemory Go as a disk store only?

No. However, you can minimize memory usage by using the sizing configuration.

Is it thread-safe to modify element values after retrieval from a store?

Remember that a value in an element is globally accessible from multiple threads. It is inherently not thread-safe to modify the value. It is safer to retrieve a value, delete the element and then reinsert the value.
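
For example, a minimal sketch of the safer pattern (MyValue and its copy constructor are hypothetical placeholders for your own value type):

Element element = cache.get("key");
if (element != null) {
    MyValue copy = new MyValue((MyValue) element.getObjectValue()); // copy, don't mutate the shared value
    copy.update();                                                  // hypothetical mutation of the copy
    cache.remove("key");                                            // delete the old element
    cache.put(new Element("key", copy));                            // reinsert the updated value
}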

The UpdatingCacheEntryFactory does work by modifying the contents of values in place in the cache. This is outside of the core of BigMemory Go and is targeted at high performance CacheEntryFactories for SelfPopulatingCaches.

Can non-serializable objects be stored?

Non-serializable objects can be stored only in the BigMemory Go memory store (heap). If an attempt is made to overflow a non-serializable element to the BigMemory Go off-heap or disk stores, the element is removed and a warning is logged.

What is the difference between TTL, TTI, and eternal?

These three configuration attributes can be used to design effective data lifetimes. Their assigned values should be tested and tuned to help optimize performance. timeToIdleSeconds (TTI) is the maximum number of seconds that an element can exist in the store without being accessed, while timeToLiveSeconds (TTL) is the maximum number of seconds that an element can exist in the store whether or not it has been accessed. If the eternal flag is set, elements are allowed to exist in the store eternally and none are evicted. The eternal setting overrides any TTI or TTL settings.

These attributes are set in the configuration file per cache. To set them per element, you must do so programmatically.
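
For example, a minimal sketch of setting these attributes on an individual element (the key and values are illustrative):

Element element = new Element("key", "value");
element.setTimeToLive(600);   // TTL: maximum seconds the element may exist, regardless of access
element.setTimeToIdle(300);   // TTI: maximum seconds the element may go without being accessed
// element.setEternal(true);  // alternatively, exempt this element from expiration
cache.put(element);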

If null values are stored in the cache, how can my code tell the difference between "intentional" nulls and non-existent entries?

Suppose your application is querying the database excessively only to find that there is no result. Since there is no result, there is nothing to cache. To prevent the query from being executed unnecessarily, cache a null value to signal that a particular key doesn't exist.

In code, checking for intentional nulls versus non-existent cache entries may look like:

// cache an explicit null value:
cache.put(new Element("key", null));

Element element = cache.get("key");
if (element == null) {
    // nothing in the cache for "key" (or expired) ...
} else {
    // there is a valid element in the cache, however getObjectValue() may be null:
    Object value = element.getObjectValue();
    if (value == null) {
        // a null value is in the cache ...
    } else {
        // a non-null value is in the cache ...
    }
}

The cache configuration in ehcache.xml may look similar to the following:

<cache
  name="some.cache.name"
  maxEntriesLocalHeap="10000"
  eternal="false"
  timeToIdleSeconds="300"
  timeToLiveSeconds="600"
/>

Use a finite timeToLiveSeconds setting to force an occasional update.

How many threads does BigMemory Go use, and how much memory does that consume?

The amount of memory consumed per thread is determined by the stack size, which is set with the -Xss JVM option.

What happens when maxEntriesLocalHeap is reached? Are the oldest items expired when new ones come in?

When the maximum number of elements in memory is reached, the Least Recently Used (LRU) element is removed. "Used" in this case means inserted with a put or accessed with a get. The LRU element is flushed asynchronously to the off-heap store.

Why is there an expiry thread for the disk store but not for the other stores?

Because the in-memory data is limited to a fixed maximum number of elements or bytes, it has a maximum memory use equal to the number of elements multiplied by the average size. When an element is added beyond the maximum, the LRU element gets flushed to the disk store. Running an expiry thread in memory turns out to be a very expensive and potentially contentious operation. It is far more efficient to check expiry only when needed rather than to search for expired elements explicitly. The tradeoff is higher average memory use.

The disk-store expiry thread keeps the disk clean. There is hopefully less contention for the disk store's locks because commonly used values are in memory. If you are concerned about CPU utilization and locking in the disk store, you can set the diskExpiryThreadIntervalSeconds to a high number, such as 1 day. Or, you can effectively turn it off by setting the diskExpiryThreadIntervalSeconds to a very large value.

What eviction strategies are supported?

LRU, LFU and FIFO eviction strategies are supported.
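
For example, a minimal sketch of selecting an eviction policy programmatically, assuming the fluent memoryStoreEvictionPolicy(...) setter on CacheConfiguration (the cache name and size are illustrative; the policy can also be set with the memoryStoreEvictionPolicy attribute in ehcache.xml):

CacheConfiguration config = new CacheConfiguration("myCache", 10000)
    .memoryStoreEvictionPolicy("LFU");  // "LRU" (the default), "LFU", or "FIFO"
Cache cache = new Cache(config);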

How does element equality work in serialization mode?

An element (key and value) in BigMemory is guaranteed to .equals() its original as it moves between stores, even though serialization and deserialization produce a new object instance.

Can you use BigMemory Go as a second-level cache in Hibernate and BigMemory Go outside of Hibernate at the same time?

Yes. You use one instance of BigMemory Go with one ehcache.xml. You configure your caches with Hibernate names for use by Hibernate. You can have other caches which you interact with directly, outside of Hibernate.

Operations

How do you get an element without affecting statistics?

Use the Cache.getQuiet() method. It returns an element without updating statistics.
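
For example:

// Read the element without updating hit/miss statistics or its last-access time:
Element element = cache.getQuiet("key");
Object value = (element == null) ? null : element.getObjectValue();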

Is there a simple way to disable BigMemory Go when testing?

Set the system property net.sf.ehcache.disabled=true to disable BigMemory Go. This can easily be done using -Dnet.sf.ehcache.disabled=true on the command line. If BigMemory Go is disabled, no elements will be added to the stores.
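
If you prefer to set the property in test code rather than on the command line, a minimal sketch (set it before the caches are created):

// Equivalent to -Dnet.sf.ehcache.disabled=true:
System.setProperty("net.sf.ehcache.disabled", "true");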

How do I dynamically change cache attributes at runtime?

This is not possible. However, you can achieve the same result as follows:

  1. Create a new cache:

    Cache cache = new Cache("test2", 1, true, true, 0, 0, true, 120, ...);
    cacheManager.addCache(cache);
    

    See the BigMemory API documentation for the full parameters.

  2. Get a list of keys using cache.getKeys, then get each element and put it in the new cache (see the sketch after this list).

    None of this will use much memory because the new cache elements have values that reference the same data as the original cache.

  3. Use cacheManager.removeCache("oldcachename") to remove the original cache.
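
A minimal sketch of step 2, assuming oldCache and newCache refer to the original cache and the newly added cache:

for (Object key : oldCache.getKeys()) {
    Element element = oldCache.get(key);
    if (element != null) {      // the key may have expired since getKeys() was called
        newCache.put(element);  // values are shared by reference, so little extra memory is used
    }
}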

Do you need to explicitly shut down the CacheManager when you finish with BigMemory Go?

There is a shutdown hook which calls the shutdown on JVM exit. If the JVM keeps running after you stop using BigMemory Go, you should call CacheManager.getInstance().shutdown() so that the threads are stopped and cache memory is released back to the JVM.

Can you use BigMemory Go after a CacheManager.shutdown()?

When you call CacheManager.shutdown(), it sets the singleton in CacheManager to null. Using a cache after this generates a CacheException.

However, if you call CacheManager.create() to instantiate a new CacheManager, then you can still use BigMemory Go. Internally the CacheManager singleton gets set to the new one, allowing you to create and shut down any number of times.
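
For example:

CacheManager manager = CacheManager.create();
// ... use caches ...
manager.shutdown();  // the singleton is now null; further cache use throws CacheException

CacheManager recreated = CacheManager.create();  // a new singleton; BigMemory Go is usable again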

Why are statistics counters showing 0 for active caches?

Statistics gathering is disabled by default in order to optimize performance. You can enable statistics gathering in caches in one of the following ways:

  • In cache configuration by adding statistics="true" to the <cache> element.
  • Programmatically when setting a cache's configuration.
  • In the Terracotta Management Console.

Certain features in the Terracotta Management Console require statistics to be enabled in order to function.

How do I detect deadlocks in BigMemory Go?

BigMemory Go does not experience deadlocks. However, deadlocks in your application code can be detected with certain tools, such as the JDK tool JConsole.

Troubleshooting

I have created a new cache and its status is STATUS_UNINITIALISED. How do I initialise it?

You need to add a newly created cache to a CacheManager before it gets initialised. Use code like the following:

CacheManager manager = CacheManager.create();
// Cache(name, maxElementsInMemory, overflowToDisk, eternal, timeToLiveSeconds, timeToIdleSeconds)
Cache myCache = new Cache("testDiskOnly", 0, true, false, 5, 2);
manager.addCache(myCache);

Why did a crashed standalone BigMemory node not come up with all data intact?

Persistence was not configured or not configured correctly on the node.

I added data on Client 1, but I can't see it on Client 2. Why not?

BigMemory Go does not distribute data. See BigMemory Max.

I have a small data set, and yet latency seems to be high.

There are a few ways to try to solve this, in order of preference:

  1. Try pinning the cache. If the data set fits comfortably in heap and is not expected to grow, this will speed up gets by a noticeable factor. Pinning certain elements and/or tuning ARC settings might also be effective for certain use cases.
  2. Increase the size of the off-heap store to allow data sets that cannot fit in heap—but can fit in memory—to remain very close to your application.

I am using Java 6 and getting a java.lang.VerifyError on the Backport Concurrent classes. Why?

The backport-concurrent library is used in BigMemory Go to provide java.util.concurrent facilities for Java 4 through Java 6. Use either the Java 4 version, which is compatible with Java 4 through 6, or use the version that matches your JDK.

I get a javax.servlet.ServletException: Could not initialise servlet filter when using SimplePageCachingFilter. Why?

If you use this default implementation, the cache is named "SimplePageCachingFilter". You need to define a cache with that name in ehcache.xml. If you override CachingFilter, you are required to set your own cache name.

Why is there a warning in my application's log that a new CacheManager is using a resource already in use by another CacheManager?

WARN  CacheManager ...  Creating a new instance of CacheManager using the diskStorePath
"C:\temp\tempcache" which is already used by an existing CacheManager.

This means that, for some reason, your application is trying to create one or more additional instances of CacheManager with the same configuration. Depending upon your persistence strategy, BigMemory Go will automatically resolve the disk-path conflict, or it will let you know that you must explicitly configure the diskStorePath.

To eliminate the warning:

  • Use a separate configuration per instance.
  • If you only want one instance, use the singleton creation methods, i.e., CacheManager.getInstance(). In Hibernate, there is a special provider for this called net.sf.ehcache.hibernate.SingletonEhCacheProvider. See Hibernate.

What does the following error mean? "Caches cannot be added by name when default cache config is not specified in the config. Please add a default cache config in the configuration."

The defaultCache is optional. When you try to programmatically add a cache by name, using CacheManager.addCache(String name), a default cache is expected to exist in the CacheManager configuration. To fix this error, add a defaultCache to the CacheManager's configuration.
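
For example, a minimal sketch (the cache name is illustrative):

// Requires a <defaultCache> element in ehcache.xml, which supplies the
// configuration template for caches added by name:
cacheManager.addCache("myNewCache");
Cache cache = cacheManager.getCache("myNewCache");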

Do I have to restart BigMemory Go after redeploying in a container?

Errors can occur if BigMemory Go runs with a web application that has been redeployed, which can prevent BigMemory Go from starting properly or at all. If the web application is redeployed, be sure to restart BigMemory Go.