
Logging in Single-Node Deployments

Use, locations, and configuration options for logs produced by Tamr Core and its microservices.

Tamr Core Logs

Tamr Core saves logs in the <tamr_home_directory>/tamr/logs directory of the Tamr Core installation. Logs are created for each of the microservices, and the log level of each can be configured by setting configuration variables with the unify-admin utility; see Configuring Tamr Core. For example, the log level for "unify", the microservice for the front-end user interface of Tamr Core, can be adjusted with TAMR_UNIFY_LOG_LEVEL. By default, all log levels are set to INFO.
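As a sketch, adjusting a log level with unify-admin might look like the following. The path to the utility and the config:set/config:get subcommand syntax shown here are illustrative; see Configuring Tamr Core for the exact invocation in your deployment.

```shell
# Illustrative sketch: raise the log level of the "unify" microservice to DEBUG.
cd <tamr_home_directory>/tamr
./utils/unify-admin.sh config:set TAMR_UNIFY_LOG_LEVEL=DEBUG

# Read the value back to confirm the change took effect.
./utils/unify-admin.sh config:get TAMR_UNIFY_LOG_LEVEL
```

A restart of the affected services is typically required before a configuration change takes effect.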

Each microservice generates a <microservice>.log file and a <microservice>.request.log file. The <microservice>.log files contain broad information about the microservice, while the <microservice>.request.log tracks the API requests made to that microservice (including API requests generated by user actions in the UI).

Additional logs may also be generated, depending on the microservice. If present, <microservice>.error.log contains ERROR-level messages and <microservice>.out.log contains the standard output.

DMS Logs

Data Movement Service (DMS) logs can be found in the $TAMR_UNIFY_HOME/tamr/logs directory with the format dms-*.log.

Tip: The dms-all.out.log file is a good starting point when reviewing logs.

To verify that the DMS has started successfully, wait up to 30 seconds after starting the service and then check the logs for a message similar to this example:

INFO  com.tamr.apps.distro.AppsService - Framework application ready

You can also submit a request to the /api/dms/service/health API endpoint and look for a status of 200 to be returned.
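A minimal health-check sketch using cURL. The DMS port varies by deployment, so <dms_port> below is a placeholder; see Single-Node Deployments for the default ports.

```shell
# Sketch: probe the DMS health endpoint and print only the HTTP status code.
# Replace <dms_port> with the port for your deployment.
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:<dms_port>/api/dms/service/health"
# A healthy service returns 200.
```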

Collecting Logs

The collect-logs.sh script within <tamr_home_directory>/tamr collects logs and writes them to a zipped file, which you can then send to Tamr Support or analyze more easily outside of the server. If the value for -d is not specified, the default is 5 days. If the output directory is not specified, the default is <tamr_home_directory>/tamr.

./collect-logs.sh -d <days to collect> <output directory>
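For example, a concrete invocation might look like this (the 7-day window and /tmp output directory are illustrative values):

```shell
# Collect the last 7 days of logs and write the zipped archive to /tmp.
cd <tamr_home_directory>/tamr
./collect-logs.sh -d 7 /tmp
```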


If you encounter a problem with a microservice, it may be helpful to access the user interface for that service. See Single-Node Deployments for default ports for single-node installations.

You can set the period of time that Tamr Core retains logs for microservices with the TAMR_LOG_RETENTION_DAYS configuration variable, which defaults to 30.


YARN logs are located in <tamr_home_directory>/tamr/logs and consist of the following:

yarn-<tamr_user>-local-nodemanager-<system name>.log - YARN NodeManager logs.

yarn-<tamr_user>-local-resourcemanager-<system name>.log - YARN ResourceManager logs.


SparkEventLogs are located in <tamr_home_directory>/tamr/unify-data/job/sparkEventLogs. These logs are related to Apache Spark jobs that Tamr Core runs and can be used with the Spark History Server to analyze Spark jobs.
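As a sketch, you can point a locally installed Spark History Server at this directory. This assumes SPARK_HOME refers to a Spark installation compatible with the version bundled with Tamr Core; spark.history.fs.logDirectory is the standard Spark property for the event log location.

```shell
# Sketch: serve Tamr Core's Spark event logs through the Spark History Server.
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=file:<tamr_home_directory>/tamr/unify-data/job/sparkEventLogs"
"$SPARK_HOME/sbin/start-history-server.sh"
# The History Server UI is then available at http://localhost:18080 by default.
```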


HBase logs are located in <tamr_home_directory>/hbase-1.3.1/logs.


Elasticsearch logs are located in <tamr_home_directory>/tamr/logs under the name elasticsearch-procurify.log.

If Elasticsearch appears to be running slowly, you can configure it to log slow queries by setting slow log thresholds: a query or fetch that takes longer than a given threshold is logged at the corresponding verbosity level. An example cURL request:

curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
  "index.search.slowlog.level" : "info",
  "index.search.slowlog.threshold.fetch.debug" : "500ms",
  "index.search.slowlog.threshold.fetch.info" : "800ms",
  "index.search.slowlog.threshold.fetch.trace" : "200ms",
  "index.search.slowlog.threshold.fetch.warn" : "1s",
  "index.search.slowlog.threshold.query.debug" : "2s",
  "index.search.slowlog.threshold.query.info" : "5s",
  "index.search.slowlog.threshold.query.trace" : "500ms",
  "index.search.slowlog.threshold.query.warn" : "10s"
}' --header "Content-Type: application/json" 
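To confirm the thresholds were applied, you can read them back with the standard Elasticsearch index-settings API (filtering with grep here is just a convenience):

```shell
# Sketch: read back the slow log settings for all indices.
curl -s -XGET 'http://localhost:9200/_all/_settings?pretty' | grep slowlog
```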

You can get high-level information about the indices by running:

curl -X GET "localhost:9200/_cluster/stats?human&pretty"

You can get information about the size of each shard in each index by running:

curl -X GET "localhost:9200/_cat/shards"
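The _cat/shards output is whitespace-delimited text, so it is easy to post-process with standard tools. A small sketch using hypothetical sample output (the index names and sizes below are made up) that pulls out just the index name and on-disk store size:

```shell
# Hypothetical sample of `_cat/shards` output.
# Columns are: index shard prirep state docs store ip node
sample="dataset_a 0 p STARTED 1200 14.2mb 127.0.0.1 node-1
dataset_b 0 p STARTED 300 900.5kb 127.0.0.1 node-1"

# Print only the index name (column 1) and store size (column 6).
echo "$sample" | awk '{print $1, $6}'
```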
