ES 7.9: repository-hdfs does not pick up the specified core-site.xml and hdfs-site.xml
Elasticsearch | by linghb | posted on 2020-12-21 | views: 1785
I am snapshotting indices to HDFS. The snapshot repository requires a uri; an explicit ip:port works, but pointing at a single NameNode is a single point of failure, so I want to use Hadoop HA and give the repository the dfs.nameservices name instead. Creating the repository then fails with UnknownHostException, which naturally suggests the plugin did not read core-site.xml & hdfs-site.xml.
I copied the two xml files into the ES config directory and into the plugins/repository-hdfs directory, but it still reports UnknownHostException.
How can I make the hdfs plugin load the Hadoop configuration files I specify?
PUT _snapshot/test_hdfs_repo
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://R2:8888",
    "load_defaults": true,
    "conf_location": "core-site.xml,hdfs-site.xml",
    "path": "/user/hongbo.ling/test_hdfs_backup",
    "conf.dfs.client.read.shortcircuit": "true",
    "conf.dfs.domain.socket.path": "/var/lib/hadoop-hdfs/dn_socket"
  }
}
R2 is the name defined as fs.defaultFS. The error is as follows:
"caused_by" : {
"type" : "repository_exception",
"reason" : "[test_hdfs_repo] cannot create blob store",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "java.net.UnknownHostException: R2",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "R2"
}
}
}
2 replies
locatelli
shwtz - an IT guy who studied physics but wants to be an actor
"uri": "hdfs://nameservice1",
"conf.dfs.nameservices": "nameservice1",
"conf.dfs.client.failover.proxy.provider.nameservice1": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"conf.dfs.ha.namenodes.nameservice1": "namenode1,namenode2",
"conf.dfs.namenode.rpc-address.nameservice1.namenode1": "node1:8020",
"conf.dfs.namenode.rpc-address.nameservice1.namenode2": "node2:8020"