[2018-09-07T11:50:20,401][WARN ][logstash.outputs.webhdfs ] webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /gnss/VehicleGpsRealtime/2018/09/06/logstashData.10.log for DFSClient_NONMAPREDUCE_-1340735009_39 on 192.168.0.244 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-1875701628_35 on 192.168.0.244\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2454)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:117)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2496)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:776)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:437)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
Can this kind of warning be safely ignored?
2 replies
rochy - rochy_he
In addition, you could set single_file_per_thread = true and see whether that avoids this error:
https://www.elastic.co/guide/e ... hread
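A minimal sketch of what that setting could look like in a webhdfs output block (the host, port, and user values here are placeholders; the path mirrors the one in the warning above). Note that the plugin documentation requires %{[@metadata][thread_id]} in the path when single_file_per_thread is enabled, so that each worker thread appends to its own file:

    output {
      webhdfs {
        host => "namenode.example.com"   # placeholder: your NameNode host
        port => 50070                    # default WebHDFS port
        user => "hdfs"                   # placeholder: HDFS user
        path => "/gnss/VehicleGpsRealtime/%{+YYYY}/%{+MM}/%{+dd}/logstashData-%{[@metadata][thread_id]}.log"
        single_file_per_thread => true   # one file per worker thread, so threads stop competing for the same lease
      }
    }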
GLC
The single_file_per_thread option seems to be for multiple Logstash instances writing to HDFS at the same time.
I am only running a single Logstash here.
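Note that even a single Logstash process runs multiple pipeline worker threads, and the warning above shows both competing DFSClients on the same host (192.168.0.244), so single_file_per_thread may still apply. The warning's other suggestion, reducing the number of workers, would look like this in logstash.yml (the value is illustrative; the default is the number of CPU cores):

    pipeline.workers: 1   # a single worker serializes appends to HDFS, at some cost to throughput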