HBase client accesses ZooKeeper to obtain root-region-server: a deadlock problem (zookeeper.ClientCnxn: Unable to get data of znode /hbase/root-region-server)

A failure occurred on November 28, 2012: "Unable to get data of znode /hbase/root-region-server"

The problem was rather strange: of our two data centers, only one was affected, and all five servers in it kept failing with identical error logs. The HBase client version in use was 0.94.0.

1) Analysis steps:

1 Use jstack and jmap to check for deadlocks, blocked threads, or memory overflow

jmap showed nothing abnormal in garbage collection; memory and CPU usage were both modest.
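
For reference, the kind of jmap checks used (assuming a HotSpot JDK of that era; pid stands for the Tomcat process id):

jmap -heap pid         # heap configuration and current usage per generation
jmap -histo:live pid   # histogram of live objects (note: this forces a full GC)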

Running the normal command

jstack -l pid > jstack.log

failed with:

pid: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding

When this error occurs, retry with the -F parameter:

jstack -l -F pid > jstack.log

Even so, jstack.log contained no useful information, so we turned to catalina.out instead. (Note: run jstack -F only after taking the service offline; it can easily hang Tomcat.) The deadlock report was found in catalina.out:

Found one Java-level deadlock:
=============================
"catalina-exec-800":
  waiting to lock monitor 0x000000005f1f6530 (object 0x0000000731902200, a java.lang.Object),
  which is held by "catalina-exec-710"
"catalina-exec-710":
  waiting to lock monitor 0x00002aaab9a05bd0 (object 0x00000007321f8708, a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation),
  which is held by "catalina-exec-29-EventThread"
"catalina-exec-29-EventThread":
  waiting to lock monitor 0x000000005f9f0af0 (object 0x0000000732a9c7e0, a org.apache.hadoop.hbase.zookeeper.RootRegionTracker),
  which is held by "catalina-exec-710"
Java stack information for the threads listed above:
=================================================
"catalina-exec-800":
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:943)
        -waiting to lock <0x0000000731902200> (a java.lang.Object)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:836)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:725)
        at org.apache.hadoop.hbase.client.ServerCallable.connect(ServerCallable.java:82)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:162)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:171)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:160)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:150)

"catalina-exec-710":
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:599)
        -waiting to lock <0x00000007321f8708> (a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1660)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.getData(ZooKeeperNodeTracker.java:158)
        -locked <0x0000000732a9c7e0> (a org.apache.hadoop.hbase.zookeeper.RootRegionTracker)
        at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.getRootRegionLocation(RootRegionTracker.java:62)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:801)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:933)
        ......
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:123)
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:99)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:894)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:948)
        -locked <0x0000000731902200> (a java.lang.Object)
  
 "catalina-exec-29-EventThread":
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.stop(ZooKeeperNodeTracker.java:98)
        -waiting to lock <0x0000000732a9c7e0> (a org.apache.hadoop.hbase.zookeeper.RootRegionTracker)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:604)
        -locked <0x00000007321f8708> (a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1660)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
Found 1 deadlock.

The InterruptedException reported in warn.log was caused by the deadlock above:

2012-11-28 20:06:17 [WARN] hconnection-0x4384d0a47f41c63 Unable to get data of znode /hbase/root-region-server
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1253)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1129)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:264)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:522)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:498)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.getData(ZooKeeperNodeTracker.java:156)
        at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.getRootRegionLocation(RootRegionTracker.java:62)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
        ......
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1042)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:836)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:725)
        at org.apache.hadoop.hbase.client.ServerCallable.connect(ServerCallable.java:82)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:162)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
      
2012-11-28 20:06:17 [ERROR] [hbase_error]
java.io.IOException: Giving up after tries=1
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:192)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
 
Caused by: java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:189)
        ... 13 more

https://issues.apache.org/jira/browse/HBASE-5060 ("HBase client is blocked forever") is somewhat similar to this problem, but does not solve it.

2 Start a separate thread in the same process to access root-region-server

// probe thread: check whether ZooKeeper itself is reachable from this process
ZooKeeper zk = null;
Watcher watch = new Watcher() {
    public void process(WatchedEvent event) {
    }
};
String zookeeperQuorum = wbobjectHBaseConfMap.get("hbase.zookeeper.quorum");

if (StringUtils.isNotBlank(zookeeperQuorum)) {
    try {
        zk = new ZooKeeper(zookeeperQuorum, 30000, watch);
        byte[] data = zk.getData("/hbase/root-region-server", watch, null);
        ApiLogger.info(" get root-region-server success! ip:"
                + Util.toStr(data));
    } catch (Exception e) {
        ApiLogger.error(" get root-region-server error!" + e.getMessage());
    } finally {
        try {
            if (zk != null) { // guard: the ZooKeeper constructor itself may have thrown
                zk.close();
            }
        } catch (InterruptedException e) {
            ApiLogger.error("close zk error!");
        }
    }
}
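
One caveat in this probe: the ZooKeeper constructor returns before the session is actually established, so the immediate getData can fail with ConnectionLoss even when ZooKeeper is healthy. A common pattern is to wait for the SyncConnected event first; a sketch that would slot into the try block above (CountDownLatch comes from java.util.concurrent):

final CountDownLatch connected = new CountDownLatch(1);
Watcher watch = new Watcher() {
    public void process(WatchedEvent event) {
        if (event.getState() == Event.KeeperState.SyncConnected) {
            connected.countDown(); // session is established
        }
    }
};
ZooKeeper zk = new ZooKeeper(zookeeperQuorum, 30000, watch);
if (!connected.await(10, TimeUnit.SECONDS)) {
    throw new IOException("timed out waiting for ZooKeeper session");
}
// zk.getData(...) is safe to call from here on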
                    

The independent thread was found to keep working normally even while the whole process was deadlocked. HBase's own ZooKeeper instance, however, could not recover after the deadlock, so scan operations timed out at 30s+ and never recovered; normally they complete at the millisecond level.

We therefore concluded that the problem lies in how the HBase client manages its ZooKeeper connection. The sequence of events:

Network jitter or a -ROOT- region migration causes a region-cache miss; the client re-fetches root-region-server from ZooKeeper; the fetch fails; and the failure triggers the ZooKeeper reset operation.

After studying the code involved in the deadlock:

One thread fetching root-region-server hits an error inside RootRegionTracker.getData(), a synchronized method, so it holds the RootRegionTracker lock; it then calls abort() and tries to lock the HConnectionImplementation in order to execute resetZooKeeperTrackers(), which calls RootRegionTracker.stop() to reset ZooKeeper.

At the same time, another thread is already inside resetZooKeeperTrackers(), holding the HConnectionImplementation lock, and needs to lock the RootRegionTracker to execute its synchronized stop() method, but that lock is held by the first thread.

In the HConnectionImplementation code:

this.rootRegionTracker = new RootRegionTracker(this.zooKeeper, this);

None of these lock acquisitions has a timeout or cancellation, so the two threads wait on each other's resource in a cycle, i.e. a deadlock.

2) Simulation 

The following is a small program that simply simulates this deadlock problem:

import java.util.concurrent.TimeUnit;

// Stand-in for HBase's Abortable: in the real client, abort() ends up calling
// resetZooKeeperTrackers(). Without this interface the program does not compile.
interface AbortAble {
    void resetZooKeeperTrackers();
}

class ZooKeeperNodeTracker {
    private boolean stopped = false;
    private AbortAble hConnectionImplementation;
    public ZooKeeperNodeTracker(AbortAble hConnectionImplementation) {
        this.hConnectionImplementation = hConnectionImplementation;
    }

    // Mirrors ZooKeeperNodeTracker.stop() in 0.94.0: synchronized, so the caller
    // must own this tracker's monitor to stop it.
    public synchronized void stop() throws InterruptedException {
        this.stopped = true;
        System.out.println(Thread.currentThread()+"|"+Thread.currentThread().getId()+"stop zknode");
        TimeUnit.MICROSECONDS.sleep(100);
        notifyAll();
    }

    public boolean condition() {
        return stopped;
    }

    public boolean start() {
         stopped=false;
         return true;
    }
    
    // Mirrors ZooKeeperNodeTracker.getData(): synchronized, and on a simulated
    // failure it calls back into the connection's synchronized resetZooKeeperTrackers()
    // while still holding this tracker's lock.
    public synchronized boolean getData(int i) throws InterruptedException {
        // simulate "error in get root region server" on every 100th call
        if (i %100 ==0){
            hConnectionImplementation.resetZooKeeperTrackers();
            throw new InterruptedException("interrupted");            
        }
        return true;
    }

}

public class HConnectionManagerTest {

    static class HConnectionImplementation implements AbortAble{
        public HConnectionImplementation() {
            rootRegionTracker = new ZooKeeperNodeTracker(this);
        }

        private volatile ZooKeeperNodeTracker rootRegionTracker;
        // Mirrors HConnectionImplementation.resetZooKeeperTrackers() in 0.94.0:
        // synchronized on the connection, then calls the tracker's synchronized stop().
        @Override
        public synchronized void resetZooKeeperTrackers() {
            try{
                if (rootRegionTracker != null) {
                    rootRegionTracker.stop();
                    rootRegionTracker = null;
                    System.out.println(Thread.currentThread()+"|"+Thread.currentThread().getId()+"resetZooKeeperTrackers");
                }
            }catch(InterruptedException e){
                System.out.println(Thread.currentThread()+"----------resetZooKeeperTrackers Interrupted-----------");
                
            }
        }


        // Drives repeated getData() calls; on error it resets the trackers, and it
        // periodically recreates the tracker, as the real client would on reconnect.
        public void testGetData(String name) {
            int i = 1;
            while (i >0) {
                i++;
                try {
                    rootRegionTracker.getData(i);
                } catch (Exception e) {
                    resetZooKeeperTrackers();
                }
                if(i %100 ==0){
                    rootRegionTracker = new ZooKeeperNodeTracker(this);
                    System.out.println(name+" restart test");
                }
            }
        }
    }

    public static void main(String[] args) {

        final HConnectionImplementation hcon = new HConnectionImplementation();
        Thread scan1 = new Thread(new Runnable() {
            public void run() {
                hcon.testGetData("[scan1]");
            }
        });
        
        Thread scan2 = new Thread(new Runnable() {
            public void run() {
                hcon.testGetData("[scan2]");
            }
        });
        try{
            scan1.start();
            scan2.start();
            TimeUnit.SECONDS.sleep(2);
        }catch (InterruptedException e){
            System.out.println("----------testgetdata -------interrupt");
        }
    }

}
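
To confirm that the program really deadlocks rather than just spinning, a probe using the standard java.lang.management API can be added after the sleep in main (a sketch; findDeadlockedThreads returns null when no deadlock exists):

// requires: import java.lang.management.ManagementFactory;
//           import java.lang.management.ThreadInfo;
//           import java.lang.management.ThreadMXBean;
ThreadMXBean mx = ManagementFactory.getThreadMXBean();
long[] ids = mx.findDeadlockedThreads();      // null when no threads are deadlocked
if (ids != null) {
    for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
        System.out.println(info);             // prints the same owner/waiter chains jstack shows
    }
}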

The class names and constructors mimic the HBase client, with the getData failure amplified. When two scans run concurrently, one scan fails to obtain root-region-server and enters stop() in ZooKeeperNodeTracker while the other scan has started executing resetZooKeeperTrackers() in HConnectionImplementation; each thread holds one of the two resources (ZooKeeperNodeTracker and HConnectionImplementation) while waiting for the other, producing the deadlock. In the simulation, the deadlock can be released by changing

public synchronized boolean getData

to

public boolean getData

or by changing

public synchronized void resetZooKeeperTrackers() {
    try{
        if (rootRegionTracker != null) {
            rootRegionTracker.stop();

to

public void resetZooKeeperTrackers() {
    try{
        if (rootRegionTracker != null) {
            synchronized(rootRegionTracker){
                rootRegionTracker.stop();

Either change removes the circular hold-and-wait between the two locks, solving the problem by lifting the mutual-exclusion condition.
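
Written out in full against the simulation classes (a sketch of the second variant; the local read guards the window in which another thread nulls the field):

public void resetZooKeeperTrackers() {
    ZooKeeperNodeTracker tracker = rootRegionTracker; // read once: the field may be nulled concurrently
    if (tracker != null) {
        synchronized (tracker) { // only the tracker lock is taken, so no lock cycle remains
            try {
                tracker.stop();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
            }
            rootRegionTracker = null;
        }
    }
}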

 3) Final solution

I consulted HBase developer Ted Yu at the Hadoop conference; he said 0.94.0 has many bugs and is unstable, and recommended upgrading to 0.94.2. Checking the release notes (after locating the problem in the HBase source and then reading the detailed svn change history of the corresponding files), two official patches, both fixed as of 0.94.1, address the problem:

1 Avoid nested retry loops that jam the RPC threads: https://issues.apache.org/jira/browse/HBASE-6326

2 Avoid the root region tracker deadlock by waiting on the -ROOT- region address being set: https://issues.apache.org/jira/browse/HBASE-6115

Reference: https://cloud.tencent.com/developer/article/1067414 (original article: "HBase client access ZooKeeper to get root-region-server DeadLock problem", Tencent Cloud community)