What To Do When a Restore On a New Server Fails

If you are trying to restore an instance of Tamr on a new server and are running into issues, review the troubleshooting steps below for your version of Tamr.

Versions before v2019.008:

When trying to restore an instance on a new server, you might get an error message indicating that a directory could not be found. This is often caused by a mismatch between the restore locations configured in Zookeeper and in local-env.sh.

To fix this, ensure that the variables in Zookeeper and local-env.sh match, and then check that the path is valid.
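As a rough illustration, the comparison can be scripted. How you obtain the two values (e.g. grepping local-env.sh and running `zkCli.sh get` against the relevant znode) depends on your installation, so the function below just takes the two values as plain strings:

```shell
# check_backup_path LOCAL_ENV_VALUE ZOOKEEPER_VALUE
# Compares the restore path read from local-env.sh with the one read from
# Zookeeper, then verifies the path actually exists on disk.
check_backup_path() {
  local_env_path="$1"
  zk_path="$2"
  if [ "$local_env_path" != "$zk_path" ]; then
    echo "MISMATCH: local-env.sh='$local_env_path' zookeeper='$zk_path'"
    return 1
  fi
  if [ ! -d "$local_env_path" ]; then
    echo "INVALID PATH: '$local_env_path' does not exist"
    return 2
  fi
  echo "OK: $local_env_path"
}

# Example with matching, existing paths:
check_backup_path /tmp /tmp
# prints: OK: /tmp
```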

Versions after v2019.008:

The backup files must be in a location that the destination instance can read, inside the directory set by TAMR_UNIFY_BACKUP_URI for the destination instance. To avoid issues, place the backup files in the location set in TAMR_UNIFY_BACKUP_URI, and ensure the functional Tamr user has read/write access to that directory.
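The access check can be sketched as below, assuming TAMR_UNIFY_BACKUP_URI points at a local filesystem path; the user name `tamr` and the invocation via `sudo -u` are placeholders for however your functional Tamr user is set up:

```shell
# has_rw_access DIR: succeed only if DIR exists and is readable and
# writable by the invoking user. Run it as the functional Tamr user,
# e.g. via: sudo -u tamr ...  ("tamr" is a placeholder user name).
has_rw_access() {
  [ -d "$1" ] && [ -r "$1" ] && [ -w "$1" ]
}

# Example: check the configured backup directory (falls back to /tmp
# here purely for illustration).
if has_rw_access "${TAMR_UNIFY_BACKUP_URI:-/tmp}"; then
  echo "read/write access OK"
else
  echo "grant access, e.g.: sudo chown -R tamr:tamr <backup dir>" >&2
fi
```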

By default, the backup/restore utility uses the /tmp directory, which can cause failures if it runs out of disk space. To fix this, change the environment variable TAMR_UNIFY_BACKUP_HADOOP_TMP_DIR to point to a drive with more space.
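A minimal sketch of redirecting the scratch directory; `$HOME/tamr-backup-tmp` stands in for whatever mount actually has enough room on your server:

```shell
# Pick a scratch location on a drive with ample free space.
# $HOME/tamr-backup-tmp is only an example target; substitute a mount
# with enough room for your backup size.
NEW_TMP="$HOME/tamr-backup-tmp"
df -k "$HOME"                 # confirm available space on the target drive
mkdir -p "$NEW_TMP"
export TAMR_UNIFY_BACKUP_HADOOP_TMP_DIR="$NEW_TMP"
echo "scratch dir: $TAMR_UNIFY_BACKUP_HADOOP_TMP_DIR"
```

Persist the setting in your Tamr configuration so it survives restarts, rather than exporting it only in the current session.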

Restore can also fail if the variable TAMR_PG_RESTORE_BINARY is not set correctly.
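A quick sanity check is to confirm the variable points at an executable pg_restore binary; `/usr/bin/pg_restore` below is a common location used only as an example fallback:

```shell
# Verify TAMR_PG_RESTORE_BINARY points at an executable pg_restore.
# /usr/bin/pg_restore is an example fallback, not a guaranteed default.
TAMR_PG_RESTORE_BINARY="${TAMR_PG_RESTORE_BINARY:-/usr/bin/pg_restore}"
if [ -x "$TAMR_PG_RESTORE_BINARY" ]; then
  "$TAMR_PG_RESTORE_BINARY" --version
else
  echo "TAMR_PG_RESTORE_BINARY is not an executable:" \
       "$TAMR_PG_RESTORE_BINARY" >&2
fi
```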

If a user has added roles to Postgres and granted them permissions on the Tamr database, the Postgres backup cannot be restored unless all of those roles exist in the target Postgres instance. Adding the --no-acl flag to the pg_restore command fixes this.
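For illustration, an invocation with --no-acl (which makes pg_restore skip GRANT/REVOKE entries, so roles that exist only on the source do not abort the restore) might look like this; the database name, dump file path, and connection user are all placeholders:

```shell
# Build a pg_restore invocation that skips ACL (GRANT/REVOKE) entries.
# <tamr_database>, the dump path, and the -U user are placeholders.
PG_RESTORE="${TAMR_PG_RESTORE_BINARY:-pg_restore}"
RESTORE_CMD="$PG_RESTORE --no-acl -U postgres -d <tamr_database> /path/to/tamr_pg_backup.dump"
echo "$RESTORE_CMD"
```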

Important:

  • If you are using a distributed file system to store the backup files, you can restore from the backup to any destination instance without having to physically transfer backup files to the destination instance.
  • Restoring Tamr from a backup deletes all data in the destination instance and automatically restarts Tamr.
  • Restoring Tamr from a backup resets the password for the “system” user to its default value.

If the problem persists, contact us at [email protected] for more information.
