After configuring webhdfs_helper.rb and running Logstash, it reports the following error:
Webhdfs check request failed. (namenode: 10.0.0.35:50070, Exception: undefined method `read_uint32' for #<FFI::MemoryPointer address=0x12f10d0 size=4>) {:level=>:error}
The error reported is:
undefined method `read_uint32' for #<FFI::MemoryPointer address=0x12f10d0 size=4>
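For context (my reading, hedged): `read_uint32` reads 4 bytes from a pointer as a native-endian unsigned 32-bit integer. The exception usually means the FFI version loaded by Logstash's JRuby does not provide that method on `FFI::MemoryPointer` (the GitHub link in the replies points at a fix for exactly this). A pure-Ruby illustration of what the call computes:

```ruby
# Pure-Ruby illustration (NOT the plugin's actual code): what read_uint32
# computes on an FFI::MemoryPointer -- 4 raw bytes interpreted as a
# native-endian unsigned 32-bit integer. On FFI builds that lack
# read_uint32, get_uint32(0) is the usual equivalent call.
raw   = [0x12F10D0].pack("L")   # pack a uint32 into 4 native-endian bytes
value = raw.unpack1("L")        # what read_uint32 would return
puts value
```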
3 replies
dengsc
Upvoted by: yj7778826
Someone has raised this in the GitHub community:
https://github.com/gpaggi/gssa ... a72b9
medcl - Tonight I fight tigers.
Upvoted by:
https://www.elastic.co/guide/e ... eytab
First, confirm that the Hadoop cluster is running normally and is configured correctly.
Then use the HTTP API to check whether HDFS can actually be reached through Kerberos.
This looks very similar to this issue: https://discuss.pivotal.io/hc/ ... rking
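Building on the suggestion above: a quick way (my sketch, not from the thread) to exercise the WebHDFS REST API is to build the check URL and fetch it with SPNEGO auth, e.g. `curl --negotiate -u : <url>` after a `kinit`. A minimal Python helper for constructing that URL, with host/port values taken from this thread for illustration:

```python
# Minimal sketch (assumption: host, port, and path values are illustrative,
# copied from the configs quoted in this thread). Builds the WebHDFS v1 REST
# URL for a basic liveness/permission check; fetch it after obtaining a
# Kerberos ticket, e.g. with: curl --negotiate -u : "<url>"
from urllib.parse import urlencode

def webhdfs_url(host: str, port: int, path: str = "/", op: str = "LISTSTATUS") -> str:
    """Return the WebHDFS v1 REST URL for the given operation."""
    return f"http://{host}:{port}/webhdfs/v1{path}?{urlencode({'op': op})}"

print(webhdfs_url("10.0.0.35", 50070))
```

If this URL fails even from curl with a valid ticket, the problem is on the cluster/Kerberos side rather than in Logstash.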
sanshi123
Upvoted by:
output {
  stdout { codec => rubydebug }
  webhdfs {
    host => "10.16.1.17"
    standby_host => "10.16.1.16"
    port => 9870
    standby_port => 9870
    path => "/origin_data/test.log"
    user => "czj@DEV.COM"
    use_kerberos_auth => "true"
    kerberos_keytab => "/etc/czj.keytab"
    retry_interval => 30
    codec => plain {
      format => "%{message}"
    }
  }
}
[2021-04-07T17:11:52,428][WARN ][logstash.outputs.webhdfs ][main] webhdfs write caused an exception: gss_init_sec_context did not return GSS_S_COMPLETE: Unspecified GSS failure. Minor code may provide more information
Ticket expired
. Maybe you should increase retry_interval or reduce number of workers. Retrying...
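The "Ticket expired" part of the log above points at the Kerberos ticket lifetime rather than at the plugin itself. A common workaround (my suggestion, not from the thread, and depending on how the plugin obtains its credentials) is to renew the ticket from the keytab on a schedule shorter than the ticket lifetime, for example via cron:

```shell
# Hypothetical crontab entry: re-acquire the ticket from the keytab every
# 8 hours (adjust so the interval is shorter than the realm's ticket_lifetime).
# Principal and keytab path are the ones shown in the config above.
0 */8 * * * kinit -kt /etc/czj.keytab czj@DEV.COM
```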