
Community Daily, Issue 890 (2020-03-23)
http://t.cn/A6zdswBR
2. Elasticsearch 6.x upgrade research report
http://t.cn/A6zgvvMR
3. Elasticsearch Machine Learning AIOps in practice
http://t.cn/A6zgvkek
Community live-stream series, episode 1: separating storage and compute, JD.com's practice with Elasticsearch:
http://t.cn/A6zeV2Hv
Editor: cyberdak
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 889 (2020-03-22)
http://t.cn/A6zrXWFJ
2. Setting up Elasticsearch and Kibana on Docker with X-Pack security enabled.
http://t.cn/A6zr0P8I
3. (VPN required) Coronavirus: The Hammer and the Dance.
http://t.cn/A6zmXEr2
Editor: 至尊宝
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 888 (2020-03-21)

Community Daily, Issue 887 (2020-03-20)
http://t.cn/A6zmLUGH
2. Approaches to intelligent Elasticsearch operations
http://t.cn/A6zmLbAp
3. Notes on using the Python Elasticsearch DSL
http://t.cn/A6zmLGdf
Editor: 铭毅天下
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 885 (2020-03-18)
http://t.cn/A6zYYqLW
2. Didi's FastIndex architecture for fast offline index building
http://t.cn/A6zYYITC
3. 58.com's Elasticsearch applications and platform-building practice
http://t.cn/A6zYYar0
Editor: 江水
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 886 (2020-03-19)
http://t.cn/A6zHpmzj
2. Query internals in Elasticsearch and MySQL: analysis and comparison
http://t.cn/A6zHpu1E
3. Reading and writing ES 7.x from Spark SQL, with a summary of issues
http://t.cn/A6zHp1qi
Editor: 金桥
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

[Bilibili] Shanghai: Bilibili is hiring ES/Solr/Lucene/OLAP engineers
Phone/WeChat: 17621966518
We hope you have the following qualities:
* Familiar with the Lucene source code, or with other open-source OLAP components
* Strong coding skills

Community Daily, Issue 883 (2020-03-16)
http://t.cn/A6zNsYBp
2. Exploring the boundary between Elasticsearch and traditional databases
http://t.cn/A6zNsetv
3. Distributed log tracing: a SkyWalking + Elasticsearch deployment walkthrough (5.x)
http://t.cn/A6zp7b4T
Editor: wt
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

An image-to-image search engine built on the Elasticsearch aliyun-knn plugin
This example is based on Elasticsearch 6.7 with the aliyun-knn plugin installed; the image feature vectors are 512-dimensional.
The aliyun-knn plugin is not available on self-hosted Elasticsearch. For a self-hosted cluster, ES 7.x is recommended, together with the fast-elasticsearch-vector-scoring plugin (https://github.com/lior-k/fast-elasticsearch-vector-scoring/).
Since my Python skills are limited, the image feature extraction in this article uses the VGGNet code from yongyuan.name; many thanks to the author!
1. Elasticsearch design
1.1 Index structure
# Create an image index
PUT images_v2
{
  "aliases": {
    "images": {}
  },
  "settings": {
    "index.codec": "proxima",
    "index.vector.algorithm": "hnsw",
    "index.number_of_replicas": 1,
    "index.number_of_shards": 3
  },
  "mappings": {
    "_doc": {
      "properties": {
        "feature": {
          "type": "proxima_vector",
          "dim": 512
        },
        "relation_id": {
          "type": "keyword"
        },
        "image_path": {
          "type": "keyword"
        }
      }
    }
  }
}
1.2 Query DSL
GET images/_search
{
  "query": {
    "hnsw": {
      "feature": {
        "vector": [255, ..., 255],
        "size": 3,
        "ef": 1
      }
    }
  },
  "from": 0,
  "size": 20,
  "sort": [
    {
      "_score": {
        "order": "desc"
      }
    }
  ],
  "collapse": {
    "field": "relation_id"
  },
  "_source": {
    "includes": [
      "relation_id",
      "image_path"
    ]
  }
}
2. Image features
extract_cnn_vgg16_keras.py
# -*- coding: utf-8 -*-
# Author: yongyuan.name
import numpy as np
from numpy import linalg as LA
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

class VGGNet:
    def __init__(self):
        # weights: 'imagenet'
        # pooling: 'max' or 'avg'
        # input_shape: (width, height, 3), width and height should be >= 48
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        self.model = VGG16(weights=self.weight, input_shape=self.input_shape,
                           pooling=self.pooling, include_top=False)
        # Warm up the model with a dummy prediction
        self.model.predict(np.zeros((1, 224, 224, 3)))

    def extract_feat(self, img_path):
        '''
        Use the VGG16 model to extract features
        and output a normalized feature vector.
        '''
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        feat = self.model.predict(img)
        norm_feat = feat[0] / LA.norm(feat[0])
        return norm_feat
# Extract the feature vector of a single image
from extract_cnn_vgg16_keras import VGGNet
model = VGGNet()
file_path = "./demo.jpg"
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()
3. Writing image features to Elasticsearch
helper.py
import re
import urllib.request

def strip(path):
    """
    Sanitize a folder/file name by removing characters
    that are illegal in Windows file names.
    :param path:
    :return:
    """
    path = re.sub(r'[?\\*|“<>:/]', '', str(path))
    return path

def getfilename(url):
    """
    Get the trailing file name from a URL.
    :param url:
    :return:
    """
    filename = url.split('/')[-1]
    filename = strip(filename)
    return filename

def urllib_download(url, filename):
    """
    Download a file.
    :param url:
    :param filename:
    :return:
    """
    return urllib.request.urlretrieve(url, filename)
train.py
# coding=utf-8
import mysql.connector
import os
from helper import urllib_download, getfilename
from elasticsearch5 import Elasticsearch, helpers
from extract_cnn_vgg16_keras import VGGNet

model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)
mydb = mysql.connector.connect(
    host="127.0.0.1",   # database host
    user="root",        # database user
    passwd="123456",    # database password
    database="images"
)
mycursor = mydb.cursor()
image_path = "./images/"

def get_data(page=1):
    page_size = 20
    offset = (page - 1) * page_size
    sql = """
    SELECT id, relation_id, photo FROM images LIMIT {0},{1}
    """
    mycursor.execute(sql.format(offset, page_size))
    myresult = mycursor.fetchall()
    return myresult

def train_image_feature(myresult):
    indexName = "images"
    photo_path = "http://域名/{0}"  # "域名" is a placeholder for your image host
    actions = []
    for x in myresult:
        id = str(x[0])
        relation_id = x[1]
        # photo = x[2].decode(encoding="utf-8")
        photo = x[2]
        full_photo = photo_path.format(photo)
        filename = image_path + getfilename(full_photo)
        if not os.path.exists(filename):
            try:
                urllib_download(full_photo, filename)
            except BaseException as e:
                print("failed to download image {1} for id {0}".format(id, full_photo))
                continue
        if not os.path.exists(filename):
            continue
        try:
            feature = model.extract_feat(filename).tolist()
            action = {
                "_op_type": "index",
                "_index": indexName,
                "_type": "_doc",
                "_id": id,
                "_source": {
                    "relation_id": relation_id,
                    "feature": feature,
                    "image_path": photo
                }
            }
            actions.append(action)
        except BaseException as e:
            print("failed to extract features of image {1} for id {0}".format(id, full_photo))
            continue
    # print(actions)
    succeed_num = 0
    for ok, response in helpers.streaming_bulk(es, actions):
        if not ok:
            print(ok)
            print(response)
        else:
            succeed_num += 1
    print("updated {0} documents in this batch".format(succeed_num))
    es.indices.refresh(indexName)

page = 1
while True:
    print("page {0}".format(page))
    myresult = get_data(page=page)
    if not myresult:
        print("no more data, exiting")
        break
    train_image_feature(myresult)
    page += 1
4. Searching for images
import json
import os
import time
from elasticsearch5 import Elasticsearch
from flask import request  # this snippet runs inside a Flask view handler
from extract_cnn_vgg16_keras import VGGNet

model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)

# Save the uploaded image
upload_image_path = "./runtime/"
upload_image = request.files.get("image")
upload_image_type = upload_image.content_type.split('/')[-1]
file_name = str(time.time())[:10] + '.' + upload_image_type
file_path = upload_image_path + file_name
upload_image.save(file_path)
# Compute the image feature vector
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()
# Delete the temporary image
os.remove(file_path)
# Search Elasticsearch with the feature vector
body = {
    "query": {
        "hnsw": {
            "feature": {
                "vector": feature,
                "size": 5,
                "ef": 10
            }
        }
    },
    # "collapse": {
    #     "field": "relation_id"
    # },
    "_source": {"includes": ["relation_id", "image_path"]},
    "from": 0,
    "size": 40
}
indexName = "images"
res = es.search(index=indexName, body=body)
# Filter out low-scoring results as appropriate for your data; in my testing,
# scores of 0.65 and above matched well.
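Following the comment above, the score filtering can be sketched as below. This is a minimal illustration: the `filter_hits` helper name and its shape are assumptions, not part of the original code, and the 0.65 default mirrors the author's observation.

```python
def filter_hits(res, threshold=0.65):
    """Keep only search hits whose _score meets the threshold,
    returning the _source fields the query asked for plus the score."""
    return [
        {
            "relation_id": hit["_source"]["relation_id"],
            "image_path": hit["_source"]["image_path"],
            "score": hit["_score"],
        }
        for hit in res["hits"]["hits"]
        if hit["_score"] >= threshold
    ]
```

The `res` dict returned by `es.search` above can be passed straight to this helper before rendering results.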
5. Dependencies
mysql_connector_repackaged
elasticsearch
Pillow
tensorflow
requests
pandas
Keras
numpy

Community Daily, Issue 882 (2020-03-15)
http://t.cn/A6zakWIP
2. ELK Stack tutorial: discover, analyze, and visualize your data effectively.
http://t.cn/RTqOKYy
3. (VPN required) 32+ funny code comments that people actually wrote.
http://t.cn/A6zSvmhc
Editor: 至尊宝
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 881 (2020-03-14)

Community Daily, Issue 880 (2020-03-13)
http://t.cn/A6zI6iOI
2. Machine learning on Elasticsearch with Spark and ES-Hadoop (VPN required)
http://t.cn/A6zI6axs
3. Geospatial queries in Elasticsearch (VPN required)
http://t.cn/A6zI6Kwp
Editor: 铭毅天下
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 879 (2020-03-12)
http://t.cn/A6z5WgQt
2. A practical guide to gracefully decommissioning Elasticsearch nodes
http://t.cn/A6z5WsPL
3. How to search dynamic fields with Elasticsearch
http://t.cn/A6z5lhdO
Editor: 金桥
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 878 (2020-03-11)
http://t.cn/A6zbKLpy
2. Elasticsearch tuning behind the 1.6 billion scans of Tencent's health code
http://t.cn/A6zbKcx2
3. Elasticsearch use cases: cross_fields
http://t.cn/A67FEMGY
Editor: 江水
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

An approach to handling object arrays in Elasticsearch
The status quo
Elasticsearch offers two ways to store arrays of objects, array and nested, but both have drawbacks.
Take the following document as an example:
{
  "user": [
    {
      "first": "John",
      "last": "Smith"
    },
    {
      "first": "Alice",
      "last": "White"
    }
  ]
}
If the mapping stores it as an array, what is actually stored is:
user.first: ["John", "Alice"]
user.last: ["Smith", "White"]
A must query for user.first:John and user.last:White would then also match this document, which is not what we want.
If the mapping stores it as nested, Elasticsearch treats each object as a separate doc; this example would store 3 docs, which significantly hurts write and query performance.
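For concreteness, the cross-matching must query described above looks like this in the query DSL (a sketch; it assumes user.first and user.last are keyword fields under the array mapping):

```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "user.first": "John" } },
        { "term": { "user.last": "White" } }
      ]
    }
  }
}
```

Under the array mapping this query matches the document even though no single user object contains both values.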
The Flatten format
The storage scheme I came up with is simple: flatten the object array into an array of keyword-type strings, hence the name "Flatten format".
Taking the document above as an example, the object array is converted into the following form:
"user.flatten": [
"first:John",
"last:Smith",
"first:John&last:Smith",
"first:Alice",
"last:White",
"first:Alice&last:White"
]
Now a must query for user.first:John and user.last:White can be rewritten as a term query for first:John&last:White, which does not match the document.
At the same time, this approach still stores a single doc, avoiding the drawback of nested.
A few notes on the Flatten format
Size of the user.flatten array
With M user objects and N properties per object, the array size is (2^N - 1) * M.
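The conversion itself can be sketched as follows. This is a minimal illustration: the `flatten` helper is hypothetical, and it assumes a fixed property order, as the original scheme requires.

```python
from itertools import combinations

def flatten(obj, prop_order):
    """Expand one object into every non-empty combination of its
    properties, joined with '&' in a fixed property order."""
    parts = ["{0}:{1}".format(k, "null" if obj.get(k) is None else obj[k])
             for k in prop_order]
    result = []
    for n in range(1, len(parts) + 1):
        for combo in combinations(parts, n):
            result.append("&".join(combo))
    return result

user = [{"first": "John", "last": "Smith"},
        {"first": "Alice", "last": "White"}]
user_flatten = [s for u in user for s in flatten(u, ["first", "last"])]
# With N=2 properties and M=2 objects this yields (2**2 - 1) * 2 = 6 entries
```

The resulting strings are exactly the user.flatten entries shown above.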
Handling empty properties
Store missing values as null, for example:
{
  "first": "John",
  "last": null
}
converts to:
[
  "first:John",
  "last:null",
  "first:John&last:null"
]
Keep the property order consistent between indexing and querying
The examples above store properties in first&last order, so a must query for user.first:John and user.last:White must be issued as first:John&last:White, never as last:White&first:John.
Drawbacks
- You must write your own code to convert JSON objects into string arrays
- You must write your own code to rewrite queries
- Only term queries are supported
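The query rewriting mentioned in the drawbacks can be sketched like this. A minimal illustration: `rewrite_must_query` is a hypothetical helper that normalizes the property order so the query string always matches what was indexed.

```python
def rewrite_must_query(conditions, prop_order, field="user.flatten"):
    """Rewrite per-property must conditions into a single term query
    on the flattened field, enforcing the fixed property order."""
    ordered = ["{0}:{1}".format(k, conditions[k])
               for k in prop_order if k in conditions]
    return {"query": {"term": {field: "&".join(ordered)}}}

# The input order does not matter; the output is always in first&last order.
body = rewrite_must_query({"last": "White", "first": "John"}, ["first", "last"])
```

Sorting the conditions by the index-time property order is what guarantees the term string matches a stored user.flatten entry.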