Unsolved

March 3rd, 2017 01:00

Hive/HBase and some other components service check failed after move from HDFS to ECS

Hello, I am trying to integrate BigInsights (which uses Ambari) with ECS. The guide I used to set up the integration environment is https://www.emc.com/collateral/TechnicalDocument/docu79368.pdf, pages 135 - 171, especially pages 147-152 and 164-171.

After that, the HDFS/MapReduce service checks pass, but:

1. The Hive service check, run as user ambari-qa, failed with this error:

<-------

2017-03-03 01:06:57,169 ERROR [HiveServer2-Handler-Pool: Thread-43]: vipr.ViPRFileSystemClientBase (ViPRFileSystemClientBase.java:checkResponse(967)) - Permissions failure for request: User: anonymous (auth:SIMPLE), host: bucket1.ns1.Site1, namespace: ns1, bucket: bucket1

2017-03-03 01:06:57,171 ERROR [HiveServer2-Handler-Pool: Thread-43]: vipr.ViPRFileSystemClientBase (ViPRFileSystemClientBase.java:checkResponse(969)) - Request message sent: MkDirRequestMessage[kind=MKDIR_REQUEST,namespace=ns1,bucket=bucket1,path=/tmp/hive/anonymous,hdfsTrustedStatus=HDFS_USER_NOT_TRUSTED,permissions=rwx------,createParent=true]

2017-03-03 01:06:57,172 WARN  [HiveServer2-Handler-Pool: Thread-43]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(311)) - Error opening session:

org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.security.AccessControlException: ERROR_INSUFFICIENT_PERMISSIONS

        at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:266)

        at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:204)

--------->

This happens even after I added "ambari-qa" as the object owner and added it to the User ACLs of the bucket.

I also tried changing fs.viprfs.auth.anonymous_translation from LOCAL_USER to NONE, but that did not work.

I also tried adding ambari-qa as a local user on ECS and on the host of the Docker container that runs ECS; that did not work either.

Does anybody know why ECS treats ambari-qa as anonymous?
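One thing I am now checking on my side (an assumption, not something from the ECS guide): HiveServer2 reports the session user as "anonymous" whenever the client connects without a username, regardless of bucket ACLs. With impersonation enabled and a username supplied on connect (e.g. beeline -n ambari-qa), the scratch directory under /tmp/hive should be created for that user instead of "anonymous":

```
<!-- hive-site.xml sketch: with doAs enabled, HiveServer2 runs operations
     as the connecting user rather than "anonymous". Assumes the client
     actually passes a username, e.g. "beeline -n ambari-qa". -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```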

2. HBase Service check failed with error:

<-------

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2330)

at org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:2335)

at org.apache.hadoop.hbase.master.HMaster.ensureNamespaceExists(HMaster.java:2544)

at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1536)

at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:471)

at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:745)

--------->

I searched, and it seems this is related to something being wrong with hbase.rootdir, which I set to viprfs://bucket1.ns1.Site1/apps/hbase/data. Are there any other steps I missed besides modifying hbase.rootdir?
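For reference, this is a sketch of the change I made (the bucket, namespace, and site names are from my environment):

```
<!-- hbase-site.xml: point HBase's root directory at the ECS bucket.
     bucket1 / ns1 / Site1 are the names used in my environment. -->
<property>
  <name>hbase.rootdir</name>
  <value>viprfs://bucket1.ns1.Site1/apps/hbase/data</value>
</property>
```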


3. Oozie Service Check failed with "Error: E0904 : E0904: Scheme [viprfs] not supported in uri [viprfs://bucket1.ns1.Site1/user/ambari-qa/examples/apps/no-op] Invalid sub-command: Missing argument for option: info"


4. Slider Service Check failed with Exception: java.lang.ClassNotFoundException: Class com.emc.hadoop.fs.vipr.ViPRFileSystem not found


5. Titan Service Check failed with Exception: "Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.emc.hadoop.fs.vipr.ViPRFileSystem not found"


Regarding #3: does ECS support Oozie?

Regarding #4 and #5: does ECS support Slider and Titan? If so, it seems I also need to add the ECS HDFS client jar to their classpaths. Where do I do that?
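In case it helps frame the question: for plain Hadoop clients, the usual way to expose an extra filesystem implementation is via HADOOP_CLASSPATH. A minimal sketch, assuming the ECS client jar lives at a path like the one below (the exact jar name and location are assumptions about my install, not documented values):

```shell
# Assumed jar path -- adjust to wherever the ECS HDFS client jar
# was actually installed on your cluster.
ECS_CLIENT_JAR=/usr/lib/hadoop/lib/viprfs-client.jar

# Append the jar to HADOOP_CLASSPATH (preserving any existing value)
# so com.emc.hadoop.fs.vipr.ViPRFileSystem can be loaded by clients
# that build their own classpath.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}$ECS_CLIENT_JAR"

echo "$HADOOP_CLASSPATH"
```

What I do not know is whether Slider and Titan honor HADOOP_CLASSPATH or need the jar placed in their own lib directories; that is the part I am asking about.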


Thanks!


March 3rd, 2017 06:00

I'm not sure, but what about your core-site.xml file? Could you attach it? Specifically, the fs.viprfs.auth.anonymous_translation property.


March 5th, 2017 23:00

coneryj, thanks for your reply. Here is the core-site.xml file:

 

 

   

   

<configuration>

   <property>
      <name>fs.AbstractFileSystem.viprfs.impl</name>
      <value>com.emc.hadoop.fs.vipr.ViPRAbstractFileSystem</value>
   </property>

   <property>
      <name>fs.defaultFS</name>
      <value>viprfs://bucket1.ns1.Site1/</value>
      <final>true</final>
   </property>

   <property>
      <name>fs.permissions.umask-mode</name>
      <value>022</value>
   </property>

   <property>
      <name>fs.trash.interval</name>
      <value>360</value>
   </property>

   <property>
      <name>fs.vipr.installation.Site1.hosts</name>
      <value>9.30.104.219</value>
   </property>

   <property>
      <name>fs.vipr.installation.Site1.resolution</name>
      <value>dynamic</value>
   </property>

   <property>
      <name>fs.vipr.installation.Site1.resolution.dynamic.time_to_live_ms</name>
      <value>900000</value>
   </property>

   <property>
      <name>fs.vipr.installations</name>
      <value>Site1</value>
   </property>

   <property>
      <name>fs.viprfs.auth.anonymous_translation</name>
      <value>LOCAL_USER</value>
   </property>

   <property>
      <name>fs.viprfs.auth.identity_translation</name>
      <value>NONE</value>
   </property>

   <property>
      <name>fs.viprfs.impl</name>
      <value>com.emc.hadoop.fs.vipr.ViPRFileSystem</value>
   </property>

   <property>
      <name>ha.failover-controller.active-standby-elector.zk.op.retries</name>
      <value>120</value>
   </property>

   <property>
      <name>hadoop.http.authentication.simple.anonymous.allowed</name>
      <value>true</value>
   </property>

   <property>
      <name>hadoop.proxyuser.bigsql.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.bigsql.hosts</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hcat.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hcat.hosts</name>
      <value>bigaperf194.svl.ibm.com</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hdfs.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hdfs.hosts</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hive.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.hive.hosts</name>
      <value>bigaperf194.svl.ibm.com</value>
   </property>

   <property>
      <name>hadoop.proxyuser.oozie.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.oozie.hosts</name>
      <value>bigaperf194.svl.ibm.com</value>
   </property>

   <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
   </property>

   <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>bigaperf193.svl.ibm.com</value>
   </property>

   <property>
      <name>hadoop.security.auth_to_local</name>
      <value>DEFAULT</value>
   </property>

   <property>
      <name>hadoop.security.authentication</name>
      <value>simple</value>
   </property>

   <property>
      <name>hadoop.security.authorization</name>
      <value>false</value>
   </property>

   <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

   <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
   </property>

   <property>
      <name>io.serializations</name>
      <value>org.apache.hadoop.io.serializer.WritableSerialization</value>
   </property>

   <property>
      <name>ipc.client.connect.max.retries</name>
      <value>50</value>
   </property>

   <property>
      <name>ipc.client.connection.maxidletime</name>
      <value>30000</value>
   </property>

   <property>
      <name>ipc.client.idlethreshold</name>
      <value>8000</value>
   </property>

   <property>
      <name>ipc.server.tcpnodelay</name>
      <value>true</value>
   </property>

   <property>
      <name>mapreduce.jobtracker.webinterface.trusted</name>
      <value>false</value>
   </property>

   <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/conf/topology_script.py</value>
   </property>

</configuration>

   

   

 


March 13th, 2017 13:00

To further investigate the Hive error, you will need to review the error logs on each of the ECS nodes. Run the following command on any ECS node immediately after the failure to gather the logs.

admin@ecs-1:~> viprexec -i -c "tail -100 /opt/storageos/logs/dataheadsvc-error.log" > /tmp/error.log


For HBase, you will need to delete the HBase metadata in Zookeeper.

hbase zkcli

[zk] rmr /hbase-unsecure

Oozie, Slider, and Titan are not supported with ECS.


March 30th, 2017 14:00

Hello Claudio,

Thanks for your reply. The HBase suggestion did resolve the HBase problem, and the information that "Oozie, Slider, and Titan are not supported with ECS" is important for us.

For Hive, the error log on the ECS node shows:

bigaperf197:/opt/storageos/tools # vi /opt/storageos/logs/dataheadsvc-error.log

2017-03-30T21:38:07,653 [pool-90-thread-14587-091e68db:15a88fc4efd:a6a6:38] ERROR  FileSystemAccessHelper.java (line 1939) nfsProcessOperation : for method nfsCreateDirectory failed to process path : tmp/hive/anonymous

2017-03-30T21:38:07,654 [pool-90-thread-14587] ERROR  RealBlobEngine.java (line 357) Error creating directory: 'tmp/hive/anonymous'. '[code=ERROR_ACCESS_DENIED, message='ERROR_ACCESS_DENIED']'

So it seems we are back to my original question. To recap:

- It fails even after I added "ambari-qa" as the object owner and added it to the User ACLs of the bucket.

- I tried changing fs.viprfs.auth.anonymous_translation from LOCAL_USER to NONE.

- I also tried adding ambari-qa as a local user on ECS and on the host of the Docker container that runs ECS; it still did not work.

Why does ECS treat ambari-qa as anonymous? Any suggestions? Thanks in advance!
