Problem: Spark jobs fail when writing to HBase because the HBase region servers cannot flush write requests to disk fast enough. The executor logs contain `RegionTooBusyException` errors.
Cause: This error most likely results from insufficient memory being allocated to the HBase region server.
Resolution: At sites where HBase is installed on a separate cluster, use a cluster management tool (Apache Ambari or similar) to tune the settings for the following configuration variables:
Note: At sites where Tamr and HBase are deployed on a single node, Tamr recommends keeping the default values for these configuration variables.
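As a hedged illustration only (the exact variables the resolution refers to are not reproduced here, and appropriate values depend on your deployment), the HBase region server memory settings most commonly tuned for `RegionTooBusyException` errors live in `hbase-site.xml`. A sketch of the relevant properties, shown with their stock HBase defaults:

```xml
<!-- Illustrative hbase-site.xml fragment; values shown are HBase defaults,
     not Tamr-recommended settings. Tune only on a dedicated HBase cluster. -->
<configuration>
  <!-- Fraction of the region server heap shared by all memstores.
       Writes are throttled once this limit is approached. -->
  <property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.4</value>
  </property>
  <!-- Per-region memstore size (bytes) that triggers a flush to disk. -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>
  <!-- Writes to a region block (surfacing RegionTooBusyException to clients)
       when its memstore reaches multiplier x flush.size. -->
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
  </property>
</configuration>
```

The region server heap itself is sized separately (for example via `HBASE_HEAPSIZE` in `hbase-env.sh`, or the equivalent Ambari setting); raising the heap gives the memstore limits above more room before writes are throttled.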