Wukong, fetch my dog-beating staff

How can I aggregate the inner_hits data in search results? Any ideas welcome.

hubiao replied • 2 followers • 1 reply • 1459 views • 2020-03-19 14:32

Conditional updates with an expression script

liuxg replied • 3 followers • 1 reply • 1399 views • 2020-03-19 14:52

ES custom sort order comes back jumbled

God_lockin replied • 6 followers • 4 replies • 4764 views • 2020-03-24 23:52

When bulk-writing documents to multiple indices, is one segment generated overall, one per document, or one per index?

mashuai replied • 3 followers • 2 replies • 1415 views • 2020-03-19 17:32

How do I express ES GET index/_count with the RestHighLevelClient?

God_lockin replied • 3 followers • 2 replies • 7851 views • 2020-03-24 23:17

When inserting a single document, check for duplicates: update if it already exists, otherwise insert

tacsklet replied • 2 followers • 1 reply • 3078 views • 2020-03-23 17:51

org.elasticsearch.transport.RemoteTransportException

damon10244201 asked • 1 follower • 0 replies • 4067 views • 2020-03-18 15:57

How to use an analyzer to extract place names from a field

bepatience asked • 1 follower • 0 replies • 2244 views • 2020-03-18 12:53

When rebalancing shards, why doesn't ES sync to the nodes with fewer shards first?

zhangrui90 asked • 1 follower • 0 replies • 2060 views • 2020-03-17 18:02

In an Elasticsearch range query, what is the difference between from/to and gt/lt?

s60514 asked • 1 follower • 0 replies • 9340 views • 2020-03-17 14:53

Elasticsearch: "None of the configured nodes were available" when connecting for search

tacsklet replied • 2 followers • 1 reply • 1599 views • 2020-03-17 10:21

Why does ES send a write to the primary shard first and then forward the request to the replica shards?

caizhongao replied • 7 followers • 5 replies • 3862 views • 2020-03-20 20:40

An image-to-image search engine built on ES with the aliyun-knn plugin

Published an article • 1 comment • 9736 views • 2020-03-15 12:47

An image-to-image search engine built on ES with the aliyun-knn plugin

This example is based on Elasticsearch 6.7 with the aliyun-knn plugin installed; the image feature vectors are 512-dimensional. If you run a self-built ES cluster, the aliyun-knn plugin is not available; in that case I recommend ES 7.x together with the fast-elasticsearch-vector-scoring plugin (https://github.com/lior-k/fast-elasticsearch-vector-scoring).

Since my Python skills are limited, the image feature extraction in this article uses the VGGNet code from yongyuan.name; many thanks to its author!

1. ES design

1.1 Index structure

Create an image index:

```json
PUT images_v2
{
  "aliases": {
    "images": {}
  },
  "settings": {
    "index.codec": "proxima",
    "index.vector.algorithm": "hnsw",
    "index.number_of_replicas": 1,
    "index.number_of_shards": 3
  },
  "mappings": {
    "_doc": {
      "properties": {
        "feature": {
          "type": "proxima_vector",
          "dim": 512
        },
        "relation_id": {
          "type": "keyword"
        },
        "image_path": {
          "type": "keyword"
        }
      }
    }
  }
}
```
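If you prefer to create this index from Python rather than from the Kibana console, the sketch below does the same thing with the `elasticsearch5` client used later in this article; the host and credentials are the ones assumed throughout, so adjust them to your cluster.

```python
from elasticsearch5 import Elasticsearch

# Same host/credentials as the ingestion and search scripts below (assumed values)
es = Elasticsearch("http://127.0.0.1:9200", http_auth=("elastic", "123455"))

index_body = {
    "aliases": {"images": {}},
    "settings": {
        "index.codec": "proxima",
        "index.vector.algorithm": "hnsw",
        "index.number_of_replicas": 1,
        "index.number_of_shards": 3
    },
    "mappings": {
        "_doc": {
            "properties": {
                "feature": {"type": "proxima_vector", "dim": 512},
                "relation_id": {"type": "keyword"},
                "image_path": {"type": "keyword"}
            }
        }
    }
}

# Create the index only if it does not exist yet
if not es.indices.exists(index="images_v2"):
    es.indices.create(index="images_v2", body=index_body)
```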

1.2 Query DSL

```json
GET images/_search
{
  "query": {
    "hnsw": {
      "feature": {
        "vector": [255, ...., 255],
        "size": 3,
        "ef": 1
      }
    }
  },
  "from": 0,
  "size": 20,
  "sort": [
    {
      "_score": {
        "order": "desc"
      }
    }
  ],
  "collapse": {
    "field": "relation_id"
  },
  "_source": {
    "includes": [
      "relation_id",
      "image_path"
    ]
  }
}
```


2. Image features

extract_cnn_vgg16_keras.py
```python
# -*- coding: utf-8 -*-
# Author: yongyuan.name
import numpy as np
from numpy import linalg as LA
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True


class VGGNet:
    def __init__(self):
        # weights: 'imagenet'
        # pooling: 'max' or 'avg'
        # input_shape: (width, height, 3), width and height should be >= 48
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        self.model = VGG16(weights=self.weight,
                           input_shape=(self.input_shape[0], self.input_shape[1], self.input_shape[2]),
                           pooling=self.pooling,
                           include_top=False)
        self.model.predict(np.zeros((1, 224, 224, 3)))

    def extract_feat(self, img_path):
        """
        Use the VGG16 model to extract features.
        Returns a normalized feature vector.
        """
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        feat = self.model.predict(img)
        norm_feat = feat[0] / LA.norm(feat[0])
        return norm_feat
```

Get the feature vector of an image:

```python
from extract_cnn_vgg16_keras import VGGNet

model = VGGNet()
file_path = "./demo.jpg"
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()
```


3. Writing the image features into ES

helper.py
```python
import re
import urllib.request


def strip(path):
    """
    Clean a file or folder name by removing characters
    that are illegal in Windows file names.
    :param path:
    :return:
    """
    path = re.sub(r'[?\\*|“<>:/]', '', str(path))
    return path


def getfilename(url):
    """
    Get the trailing file name from a URL.
    :param url:
    :return:
    """
    filename = url.split('/')[-1]
    filename = strip(filename)
    return filename


def urllib_download(url, filename):
    """
    Download the file at url and save it as filename.
    :param url:
    :param filename:
    :return:
    """
    return urllib.request.urlretrieve(url, filename)
```
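A quick usage sketch for these helpers (the URL below is purely hypothetical):

```python
from helper import getfilename, urllib_download

url = "http://example.com/photos/cat*01.jpg"   # hypothetical image URL
filename = getfilename(url)                    # -> "cat01.jpg"; the illegal "*" is stripped
urllib_download(url, "./images/" + filename)   # download and save next to the other images
```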


train.py
```python
# coding=utf-8
import mysql.connector
import os
from helper import urllib_download, getfilename
from elasticsearch5 import Elasticsearch, helpers
from extract_cnn_vgg16_keras import VGGNet

model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)
mydb = mysql.connector.connect(
    host="127.0.0.1",    # database host
    user="root",         # database user
    passwd="123456",     # database password
    database="images"
)
mycursor = mydb.cursor()
image_path = "./images/"


def get_data(page=1):
    """Fetch one page of image rows from MySQL."""
    page_size = 20
    offset = (page - 1) * page_size
    sql = """
    SELECT id, relation_id, photo FROM images LIMIT {0},{1}
    """
    mycursor.execute(sql.format(offset, page_size))
    myresult = mycursor.fetchall()
    return myresult


def train_image_feature(myresult):
    indexName = "images"
    photo_path = "http://your-domain/{0}"  # base URL of the image host
    actions = []
    for x in myresult:
        id = str(x[0])
        relation_id = x[1]
        # If the photo column comes back as bytes, decode it first:
        # photo = x[2].decode(encoding="utf-8")
        photo = x[2]
        full_photo = photo_path.format(photo)
        filename = image_path + getfilename(full_photo)
        if not os.path.exists(filename):
            try:
                urllib_download(full_photo, filename)
            except BaseException as e:
                print("Failed to download image {1} for id {0}".format(id, full_photo))
                continue
        if not os.path.exists(filename):
            continue
        try:
            feature = model.extract_feat(filename).tolist()
            action = {
                "_op_type": "index",
                "_index": indexName,
                "_type": "_doc",
                "_id": id,
                "_source": {
                    "relation_id": relation_id,
                    "feature": feature,
                    "image_path": photo
                }
            }
            actions.append(action)
        except BaseException as e:
            print("Failed to extract a feature from image {1} for id {0}".format(id, full_photo))
            continue
    # print(actions)
    succeed_num = 0
    for ok, response in helpers.streaming_bulk(es, actions):
        if not ok:
            print(ok)
            print(response)
        else:
            succeed_num += 1
            print("Indexed {0} documents in this batch so far".format(succeed_num))
            es.indices.refresh(indexName)


page = 1
while True:
    print("Current page: {0}".format(page))
    myresult = get_data(page=page)
    if not myresult:
        print("No more data, exiting")
        break
    train_image_feature(myresult)
    page += 1
```

4. Searching for images

```python
import requests
import json
import os
import time
from elasticsearch5 import Elasticsearch
from extract_cnn_vgg16_keras import VGGNet

model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)

# Save the uploaded image ("request" here comes from the web framework, e.g. a Flask request object)
upload_image_path = "./runtime/"
upload_image = request.files.get("image")
upload_image_type = upload_image.content_type.split('/')[-1]
file_name = str(time.time())[:10] + '.' + upload_image_type
file_path = upload_image_path + file_name
upload_image.save(file_path)

# Compute the image's feature vector
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()

# Delete the temporary image file
os.remove(file_path)

# Search ES with the feature vector
body = {
    "query": {
        "hnsw": {
            "feature": {
                "vector": feature,
                "size": 5,
                "ef": 10
            }
        }
    },
    # "collapse": {
    #     "field": "relation_id"
    # },
    "_source": {"includes": ["relation_id", "image_path"]},
    "from": 0,
    "size": 40
}
indexName = "images"
res = es.search(indexName, body=body)
```

For the returned results, it is best to filter out low-scoring hits according to your own data; in my tests, hits scoring 0.65 or higher matched the query image reasonably well.
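As a minimal sketch of that post-filtering step (the 0.65 cutoff is the value suggested above; the `SCORE_THRESHOLD` name and the printed structure are illustrative, not part of the original code):

```python
# Keep only hits whose _score clears the threshold suggested above
SCORE_THRESHOLD = 0.65

matches = [
    {
        "score": hit["_score"],
        "relation_id": hit["_source"]["relation_id"],
        "image_path": hit["_source"]["image_path"],
    }
    for hit in res["hits"]["hits"]
    if hit["_score"] >= SCORE_THRESHOLD
]
print(json.dumps(matches, ensure_ascii=False, indent=2))
```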

5. Required packages

```
mysql_connector_repackaged
elasticsearch
Pillow
tensorflow
requests
pandas
Keras
numpy
```

Why is the second of two back-to-back queries actually slower?

kennywu76 replied • 3 followers • 3 replies • 3406 views • 2020-03-18 11:12

The _all field is enabled in ES, but it does not end up containing the values of all fields

xiao asked • 2 followers • 0 replies • 1907 views • 2020-03-14 20:14