This site hosts historical documentation. Visit www.terracotta.org for recent product information.
The Terracotta Management Console (TMC) is a web-based administration and monitoring application for Terracotta products. TMC connections are managed through the Terracotta Management Server (TMS), which must be running for the TMC to function.
To confirm the version of the TMC you are running, and for other information about the TMC, click About on the toolbar.
When you first connect to the TMC, the authentication setup page appears, where you can choose to run the TMC with authentication or without. Authentication can also be enabled/disabled in the TMC Settings panel.
If you do not enable authentication, you can connect to the TMC without being prompted for a login or password.
If you enable authentication, the following choices appear:
Instructions for setting up connections to LDAP and Active Directory are available with the form that appears when you select LDAP or Active Directory. Setting up authentication and authorization controls access to the TMC itself but does not secure connections, which must be secured separately. In addition, an appropriate Terracotta license file is needed to run the TMC with security.
Authentication using built-in role-based accounts backed by a .ini file is the simplest scheme. When you choose .ini-file authentication, you must restart the TMC using the stop-tmc and start-tmc scripts. A setup page appears for initializing the two accounts that control access to the TMC:
Create a password for each account, then click Done to go to the login screen. The login screen appears each time a connection is made to the TMC.
The Terracotta Management Console allows a connected user to remain connected indefinitely, whether or not that user is active. To set a default timeout for inactivity, navigate to the WEB-INF directory, open the web.xml file, and uncomment the following block. You can then accept its default value of 30 for idleTimeoutMinutes or specify a different value:
<context-param>
  <description>
    After this amount of time has passed without user activity,
    the user will be automatically logged out.
  </description>
  <param-name>idleTimeoutMinutes</param-name>
  <param-value>30</param-value>
</context-param>
Note that, internal to the TSA and TMC, Apache Shiro session management is configured with an inactivity timeout of 10 minutes, expressed in milliseconds:
securityManager.sessionManager.globalSessionTimeout = 600000
However, this timeout setting is unrelated to human end-user activity.
For more information about Apache Shiro, see Shiro session management.
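For reference, this setting lives in a Shiro INI configuration. A minimal sketch of the relevant section follows; the exact file name and its location within the TMC distribution are assumptions:

```ini
[main]
# Expire internal Shiro sessions after 10 minutes (600,000 ms) of inactivity
securityManager.sessionManager.globalSessionTimeout = 600000
```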
A view of the TMC is shown below. Display panels and the connection-groups drop-down menu appear if an active (connected) connection group is available and selected.
When you initially log on to the TMC, only default connection groups with default connections exist. If a node that can be monitored is running on localhost at the port specified by one of the default connections, that default connection appears as an active connection. Other default connections appear as unavailable (inactive) connections.
To create and edit connections and connection groups, use the Connections panel. To open the Connections panel, click Settings or + New Connection on the toolbar. Connections are assigned to connection groups to simplify management tasks.
Connections allow you to monitor and administer nodes (both clustered and standalone). Connections from the TMS to agents are made using a location URI in the following form:
<scheme>://<host-address>:<port>
URIs beginning with "http:" are for non-secure connections; secure (SSL) connections use "https:".
If the URI is for a server in a Terracotta Server Array, all other nodes participating in the cluster are automatically found. It is not required to create separate connections for those other nodes. A typical URI for a server is similar to:
http://myServer:9030
where an IP address or resolvable hostname is followed by the tsa-group-port, which serves as the management port (9530 by default; the example shows a custom value). This port is configured in tc-config.xml.
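For illustration, a minimal tc-config.xml servers block showing where the tsa-group-port is set. The host and server names are placeholders, and other required elements are omitted:

```xml
<servers>
  <server host="myServer" name="server1">
    <!-- The tsa-group-port doubles as the management port used by the TMS -->
    <tsa-group-port>9530</tsa-group-port>
  </server>
</servers>
```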
A typical URI for a Terracotta client or BigMemory Go will appear similar to:
http://myHost:9888
where an IP address or resolvable hostname is followed by the agent's management port (9888 by default), which is set in the node's configuration file. For BigMemory Go, for example, use the managementRESTService element in ehcache.xml.
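A sketch of enabling the REST management service in ehcache.xml for BigMemory Go. The bind address shown is an assumption; check the ehcache.xsd for your version for the exact attribute set:

```xml
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         name="myCacheManager">
  <!-- Expose the management agent on port 9888 so the TMS can connect -->
  <managementRESTService enabled="true" bind="0.0.0.0:9888"/>
</ehcache>
```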
To add a new connection:
A screen appears confirming the agent found at the given location. If no agent is found, a warning appears and no connection can be set up. The location is relative to the machine running the Terracotta Management Server (TMS). The default location, "localhost", is the machine the TMS is running on, and might not be the machine your browser is running on.
The connection timeout ensures that the TMC does not hang waiting for a connection to an unreachable node.
The read timeout ensures that the TMC does not hang waiting for a connection to an unresponsive node.
Managed connections that appear in the connections list can be edited or deleted.
To delete an existing standalone connection, click Settings on the toolbar to view the Connections panel. Locate the connection under its connection group in the Configured Connections list and click the red X next to that connection's name.
To delete an existing cluster connection, click Settings on the toolbar to view the Connections panel. Locate the connection group in the Configured Connections list and click Delete next to that group's name.
To edit a standalone connection:
You can choose a group for the connection from the menu of existing groups, or create a new connection group. If you create a new group, enter a name for the group.
The connection timeout ensures that the TMC does not hang waiting for a connection to an unreachable node.
The read timeout ensures that the TMC does not hang waiting for a connection to an unresponsive node.
To edit a cluster connection, click Edit for the cluster group, then edit the group name and connection URL. Click Save Changes to save the new values or Cancel to revert to the original values.
For every configured connection group, you can display a mini dashboard to view group status.
Each TSA connection-group dashboard displays the number of connected active (green) and mirror (blue) servers. It also displays the number of clients connected to that TSA. Certain other server states might also be indicated on the dashboard, including server starting or recovering (yellow) and server unreachable (red).
Each standalone connection group dashboard displays its number of configured connections and the number currently connected.
Each dashboard has a control drop-down menu with commands applicable to that dashboard and its associated connection group. For example, to hide a connection group's dashboard, choose Hide This Connection from the group's dashboard control menu. Hiding the dashboard does not affect the connection group's connections. To restore the dashboard, click Settings on the toolbar, then select the Show in Dashboard checkbox for that group.
To manage the application data of nodes in a connection group, select the group, then click the Application Data tab. Each Application Data panel has a CacheManager and Scope menu to select which CacheManagers and nodes supply the data for that panel.
The Overview panel displays health metrics for CacheManagers and their caches, including certain cache statistics to help you track performance and resource usage across all CacheManagers.
Real-time statistics are displayed in a table with the following columns:
To choose the types of statistics displayed in the table, click Configure Columns to open a list of available statistics. Choose statistics (or set the option to display all statistics), then click OK to accept the change. The table immediately begins to display the chosen statistics.
To sort the table by a specific statistic, click the column head for that statistic.
The Charts panel graphs the same statistics available in the Overview panel. This is useful for tracking performance trends and discovering potential issues.
In addition to being able to select a CacheManager and scope for the displayed data, you can also select a specific cache (or all caches) for the selected CacheManager.
Each graph plots the appropriate metrics along the Y axis against system time (X axis). To view the value along a single point on a graph, float the mouse pointer over that point. This also displays the units used for the statistic being graphed.
To choose the type of statistic graphed by a particular chart, click the chart's corresponding Configure link to open a list of available statistics. Choose a statistic, then click OK to accept the change. The chart immediately begins to graph the chosen statistic.
The Sizing panel provides information on the usage of the heap, off-heap, and disk tiers by the caches of the selected CacheManager. To view tier usage by any active CacheManager, select that CacheManager from the CacheManager drop-down menu.
The Relative Cache Sizes by Tier table displays usage of the tier selected from the Tier drop-down menu. The table has the following columns:
Click a row in the table to set the cache-related tier graphs to display values for the named cache.
The panel shows the following bar graphs:
To display an exact usage value, float the mouse pointer over a bar. To display values for that tier in the Relative Cache Sizes by Tier table, click a tier's bar. The selected tier's bar is lighter in color than the other bars.
The Selected Cache drop-down menu determines which cache is shown in the cache-related tier graphs and highlighted in the Relative Cache Sizes by Tier table. The menu also indicates whether the cache uses size-based sizing (Automatic Resource Control, or ARC) or entry-based sizing.
The Management panel displays a table listing information about the selected CacheManager by node (where the CacheManager exists) or by its caches. Choose the CacheManagers radio button to show a table with a node list, or the Caches radio button to show a table with a cache list. These tables (and any sublist tables) can be sorted and ordered by any column by clicking the column head.
Global cache disable/enable controls are at the top of the panel.
The cache list is a table of caches under the selected cache manager.
The table has the following columns:
If a cache listing is expanded using the arrow to the left of the cache name, a sublist appears with a table of all of the nodes that contain the cache. The table has the following columns:
The CacheManager list is a table of nodes under the selected cache manager.
The table has the following columns:
If a node listing is expanded using the arrow to the left of the connection name, a sublist appears with a table of the caches present on that node:
The Content panel allows you to issue BigMemory SQL queries against your caches. For more information about BigMemory SQL, click the Query link to see help, or go to BigMemory SQL Queries.
The Monitoring tab is available only for cluster connection groups. To monitor the functioning of the cluster, as well as the functioning of individual cluster components, use the features available under this tab.
The Runtime statistics graphs provide a continuous feed of server and client metrics. Sampling begins automatically when a runtime statistic panel is first viewed, but historical data is not saved.
Use the Select View menu to set the runtime statistics view to one of the following:
Specific runtime statistics are defined in the following sections. The cluster components for which the statistic is available are indicated in the text.
Shows the total number of live objects in the cluster, mirror group, server, or clients.
If the trend for the total number of live objects goes up continuously, clients in the cluster will eventually run out of memory and applications might fail. Upward trends indicate a problem with application logic, garbage collection, or the tuning of one or more clients.
Shows the number of entries being evicted from the cluster, mirror group, or server.
Shows the number of expired entries found (and being evicted) on the TSA, mirror group, or server.
Shows the number of completed writes (or mutations) in the TSA or selected server. Operations can include evictions and expirations. Large-scale eviction or expiration operations can cause spikes in the operations rate (see the corresponding evictions and expirations statistical graphs). This rate is low in read-mostly situations, indicating that there are few writes and little data to evict. If this number drops or deviates regularly from an established baseline, it might indicate issues with network connections or overloaded servers.
When clients are selected, this statistic is reported as the Write Transaction Rate, tracking client-to-server write transactions.
A measure of how many objects (per second) are being faulted in from the TSA in response to application requests. Faults from off-heap or disk occur when an object is not available in a server's on-heap cache. Flushes occur when the heap or off-heap cache must clear data due to memory constraints. Objects being requested for the first time, or objects that have been flushed from off-heap memory before a request arrives, must be faulted in from disk. High rates could indicate inadequate memory allocation at the server.
BigMemory Max 4.1 supports a "Hybrid" mix of solid-state device (SSD) flash drives (an economical way to increase storage) alongside the standard DRAM-based off-heap storage. Compared to the Offheap Usage graph, the Data Storage Usage graph shows that the hybrid maximum data storage, which includes both off-heap memory and any flash drives, can be on a much larger scale than off-heap alone.
Shows, in megabytes or gigabytes, the maximum available off-heap memory (the configured limit), the "OffHeap Reserved" memory (made available), and the used off-heap memory (containing data). These statistics appear only if BigMemory is in effect.
The Events panel displays cluster events received by the Terracotta server array. You can use this panel to quickly view these events in one location in an easy-to-read format, without having to search the Terracotta logs.
The number of unread events is shown in a badge on each clustered connection's mini dashboard. The badge color indicates the severity of unread events: red for warnings and above, or gray if all unread events are of lower severity.
From Application Data > Events, the Level dropdown list allows you to select DEBUG, INFO, WARN, ERROR, or CRITICAL. Events will display that are equal to or higher than the level you select. For example:
For more information on specific events, see Event Types and Definitions.
The Administration panels provide information about the Terracotta cluster as well as tools for operations, including backing up cluster data.
Using subpanels, the Configuration panel displays the status, environment, and configuration information for the servers and clients selected in the Cluster Node menu. This information is useful for debugging and when reporting problems.
The Main subpanel displays the server status and a list of properties, including IP address, version, license (capabilities), and restartability and failover modes. A specific server must be selected to view this subpanel. Administrators can shut down servers from this panel.
The following additional subpanels are available:
The Logs panel displays live logs for the server selected in the Cluster Node menu. Scroll up to pause the live update (or click Pause). Scroll down to the end of the log to restart the live update (or click Resume).
The Backup panel provides a control for creating a backup of cluster data. The following server configuration elements control backup execution:
<restartable enabled="true"/>
– Global setting that must be "true" for backups to be enabled (for all servers). False by default.
<data-backup>terracotta/backups</data-backup>
– Server-level element setting the path for storing the backup files. The default path is shown.
For more information on restoring from backups, see the Terracotta Server Array documentation.
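Put together, a tc-config.xml sketch with backups enabled might look like the following. The server name and host are placeholders, unrelated elements are omitted, and the exact placement of the elements should be verified against the tc-config schema for your release:

```xml
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="myServer" name="server1">
      <!-- Where backup files are written on this server -->
      <data-backup>terracotta/backups</data-backup>
    </server>
    <!-- Fast Restartability must be enabled for backups (applies to all servers) -->
    <restartable enabled="true"/>
  </servers>
</tc-config>
```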
You can reload the Terracotta configuration to add or remove servers. The configuration file must be edited and made available to every server and client before it can be reloaded successfully.
For more information on the Terracotta configuration and editing the servers section, see the Terracotta Server Array documentation.
Data lifecycle operations have been added to the TMC for more control and visibility of clustered data. This includes the following capabilities:
Only the administrator can see the "Destroy" feature. Use of this feature appears only in the TMC/TMS logs and not in server logs.
Troubleshooting Terracotta clusters with the TMC includes passive monitoring through viewing events and statistical trends using the monitoring panels as well as proactively investigating logs and thread dumps. If a cluster crosses certain resource thresholds, it might enter a mode of limited functionality to prevent a crash.
The TMC flashes warnings if the TSA enters throttled or restricted mode. These modes are initiated if memory resources drop below a certain threshold and endanger the operations of the cluster. The TSA can automatically recover from throttled mode after sufficient expired data is evicted. Under certain conditions recovery might fail and restricted mode is entered. You can provide temporary relief by clearing or disabling caches. However, if the TSA enters this mode, it is an indication that memory resources have been under-allocated. The cluster might need to be stopped and additional steps taken to ensure that enough memory is available to cover cluster operations.
You can get a snapshot of the state of each server and client in the Terracotta cluster using thread dumps. To display the console's thread-dumps feature, click Troubleshooting.
The thread-dump navigation pane lists completed thread dumps by date-time stamp. The contents of selected thread dumps are displayed in the right-side pane. To delete all shown thread dumps, click Clear All.
To generate a thread dump, follow these steps:
When complete, the thread dump appears in the thread-dumps navigation pane.
The entries correspond to servers and clients included in the thread dump.
Thread dumps are downloaded in the form of a zip file.
Servers that appear in the Scope menu but are not connected produce empty thread dumps.
To view the log of each server in the Terracotta cluster:
The logs will no longer update and will stop automatically scrolling. Click Resume (or scroll to the bottom) to restart the updating process.
Logs are downloaded as a zip file.
Click Settings on the toolbar to open a dialog where global TMC options can be configured.
Click the Polling tab to set the Polling Interval Seconds, which controls the granularity of polled statistical data. Note that shorter polling intervals can have a greater effect on the overall performance of the nodes being polled. To reset to default values, click Reset to Defaults.
Click the Security tab to configure security. If you choose to change the type of security used by the TMS, note the following:
For SSL connections, you can use a custom truststore instead of the default Java cacerts. The custom truststore must be located in the default directory specified in the Security panel.
See the account setup section and additional TMC documentation for more information on setting up security.