The fool seeks out his teacher's faults; the wise learn from his teacher's strengths.

Community Daily, Issue 894 (2020-03-27)

1. Get on board! Announcement: Elastic Chinese Community online livestream, 2020-03-28 at 3:00 PM!
https://t.cn/A6Z43Fpb
2. A getting-started guide to the Elasticsearch Java REST Client
https://t.cn/A6Z4n0Jc
3. Syncing data between a relational database and Elasticsearch with Logstash
https://t.cn/A6Z4nTFc
4. A concise guide to Elasticsearch plugin development
https://t.cn/A6Z4nRZh

Editor: 铭毅天下
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup
 

 

Community Daily, Issue 893 (2020-03-26)

1. Boosting development efficiency by searching GitHub with Workplace Search
http://t.cn/A6ZAL3os
2. A deep dive into how Filebeat works
http://t.cn/A6ZALruF
3. Collecting logs with Logstash as a Docker logging driver
http://t.cn/A6hkYz8m

Editor: 金桥
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 892 (2020-03-25)

1. Syncing data to Elasticsearch from the MySQL binlog in practice
http://t.cn/A6Zz4U5b
2. Solr vs. Elasticsearch: which search technology is stronger?
http://t.cn/A6Zz4VyN
3. Can deleting an index in Elasticsearch take a node offline?
https://t.cn/A6ZzVMay
 
Community online livestream series, session 1: separating storage and compute, JD.com's practice with Elasticsearch:
http://t.cn/A6zeV2Hv
 
Editor: 江水
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup
 

Community Daily, Issue 891 (2020-03-24)

1. Enabling HTTPS access for Elasticsearch.
http://t.cn/A6zsN39a
2. Using Painless scripts in Elasticsearch.
http://t.cn/A6zsN1NX
3. An introduction to the relationship between Elasticsearch and Lucene.
http://t.cn/A6zsNBg0

Editor: 叮咚光军
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub

Community Daily, Issue 890 (2020-03-23)

1. Fixing uneven shard distribution and unbalanced cluster load in Elasticsearch
http://t.cn/A6zdswBR
2. An investigation report on upgrading to Elasticsearch 6.x
http://t.cn/A6zgvvMR
3. AIOps in practice with Elasticsearch Machine Learning
http://t.cn/A6zgvkek

Community online livestream series, session 1: separating storage and compute, JD.com's practice with Elasticsearch:
http://t.cn/A6zeV2Hv

Editor: cyberdak
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 889 (2020-03-22)

1. Protecting Kibana with Nginx basic auth.
http://t.cn/A6zrXWFJ
2. Setting up Elasticsearch and Kibana on Docker with X-Pack security enabled.
http://t.cn/A6zr0P8I
3. (VPN required) Coronavirus: The Hammer and the Dance.
http://t.cn/A6zmXEr2

Editor: 至尊宝
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 888 (2020-03-21)

1. Building a graph database on Elasticsearch (VPN required)

http://t.cn/R9Xgj2X

2. Importing data from CSV files into Elasticsearch

http://t.cn/A6z1KUQk

3. A Scala client for Elasticsearch

http://t.cn/Rj56raD


Community Daily, Issue 887 (2020-03-20)

1. Building an image-to-image search engine in practice
http://t.cn/A6zmLUGH
2. Approaches to intelligent Elasticsearch operations
http://t.cn/A6zmLbAp
3. Notes on using the Python Elasticsearch DSL
http://t.cn/A6zmLGdf

Editor: 铭毅天下
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 885 (2020-03-18)

1. The Logstash grok filter plugin in action
http://t.cn/A6zYYqLW
2. FastIndex: Didi's architecture for fast offline index building
http://t.cn/A6zYYITC
3. Elasticsearch adoption and platform building at 58.com
http://t.cn/A6zYYar0

Editor: 江水
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup
 

Community Daily, Issue 886 (2020-03-19)

1. Customizing a Regional Map in Kibana
http://t.cn/A6zHpmzj
2. Analyzing and comparing query internals in Elasticsearch and MySQL
http://t.cn/A6zHpu1E
3. Reading and writing ES 7.x from Spark SQL, with a summary of issues
http://t.cn/A6zHp1qi

Editor: 金桥
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

[Bilibili] Shanghai: Bilibili is hiring ES/Solr/Lucene/OLAP engineers

Location: Shanghai
Phone/WeChat: 17621966518

We hope you:
* Are familiar with the Lucene source code, or with other open-source OLAP components
* Have strong coding skills

Community Daily, Issue 883 (2020-03-16)

1. A practical guide to Elasticsearch index design
http://t.cn/A6zNsYBp
2. Exploring the boundary between Elasticsearch and traditional databases
http://t.cn/A6zNsetv
3. Distributed log tracing: deploying SkyWalking + Elasticsearch in practice (5.x)
http://t.cn/A6zp7b4T

Editor: wt
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

An image-to-image search engine built on Elasticsearch with the aliyun-knn plugin


This example is based on Elasticsearch 6.7 with the aliyun-knn plugin installed; the image feature vectors are 512-dimensional.
If you run a self-hosted Elasticsearch cluster, the aliyun-knn plugin is not available; in that case, ES 7.x with the fast-elasticsearch-vector-scoring plugin (https://github.com/lior-k/fast-elasticsearch-vector-scoring/) installed is recommended instead.

Since my Python skills are limited, the image feature extraction in this article uses the VGGNet code from yongyuan.name; many thanks to the author!

1. Elasticsearch design

1.1 Index structure

# Create an image index
PUT images_v2
{
  "aliases": {
    "images": {}
  }, 
  "settings": {
    "index.codec": "proxima",
    "index.vector.algorithm": "hnsw",
    "index.number_of_replicas":1,
    "index.number_of_shards":3
  },
  "mappings": {
    "_doc": {
      "properties": {
        "feature": {
          "type": "proxima_vector",
          "dim": 512
        },
        "relation_id": {
          "type": "keyword"
        },
        "image_path": {
          "type": "keyword"
        }
      }
    }
  }
}
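
To sanity-check the mapping, a single document can be indexed by hand. The snippet below is a minimal sketch (not from the original article), assuming the same local cluster and credentials used later in train.py; the document id, relation_id, and image path are hypothetical, and the zero vector is only a placeholder for a real 512-dimensional VGGNet feature.

from elasticsearch5 import Elasticsearch

es = Elasticsearch("http://127.0.0.1:9200", http_auth=("elastic", "123455"))
doc = {
    "relation_id": "1001",            # business id, used later for collapsing duplicates
    "image_path": "images/demo.jpg",  # hypothetical path, for illustration only
    "feature": [0.0] * 512            # placeholder; a real VGGNet vector goes here
}
es.index(index="images", doc_type="_doc", id="1", body=doc)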

1.2 Query DSL

GET images/_search
{
  "query": {
    "hnsw": {
      "feature": {
        "vector": [255,....255],
        "size": 3,
        "ef": 1
      }
    }
  },
  "from": 0,
  "size": 20, 
  "sort": [
    {
      "_score": {
        "order": "desc"
      }
    }
  ], 
  "collapse": {
    "field": "relation_id"
  },
  "_source": {
    "includes": [
      "relation_id",
      "image_path"
    ]
  }
}

2. Image features

extract_cnn_vgg16_keras.py

# -*- coding: utf-8 -*-
# Author: yongyuan.name
import numpy as np
from numpy import linalg as LA
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
class VGGNet:
    def __init__(self):
        # weights: 'imagenet'
        # pooling: 'max' or 'avg'
        # input_shape: (width, height, 3), width and height should >= 48
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        self.model = VGG16(weights = self.weight, input_shape = (self.input_shape[0], self.input_shape[1], self.input_shape[2]), pooling = self.pooling, include_top = False)
        self.model.predict(np.zeros((1, 224, 224 , 3)))
    '''
    Use vgg16 model to extract features
    Output normalized feature vector
    '''
    def extract_feat(self, img_path):
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        feat = self.model.predict(img)
        norm_feat = feat[0]/LA.norm(feat[0])
        return norm_feat
# Usage: extract the feature vector of an image
from extract_cnn_vgg16_keras import VGGNet
model = VGGNet()
file_path = "./demo.jpg"
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()

3. Writing image features to Elasticsearch

helper.py

import re
import urllib.request
def strip(path):
    """
    Strip characters that are illegal in Windows folder/file names
    from the given path string.
    :param path:
    :return:
    """
    path = re.sub(r'[?\\*|"<>:/]', '', str(path))
    return path

def getfilename(url):
    """
    Get the trailing file name from a URL.
    :param url:
    :return:
    """
    filename = url.split('/')[-1]
    filename = strip(filename)
    return filename

def urllib_download(url, filename):
    """
    Download the file at url and save it as filename.
    :param url:
    :param filename:
    :return:
    """
    return urllib.request.urlretrieve(url, filename)

train.py

# coding=utf-8
import mysql.connector
import os
from helper import urllib_download, getfilename
from elasticsearch5 import Elasticsearch, helpers
from extract_cnn_vgg16_keras import VGGNet
model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)
mydb = mysql.connector.connect(
    host="127.0.0.1",  # database host
    user="root",  # database user
    passwd="123456",  # database password
    database="images"
)
mycursor = mydb.cursor()
image_dir = "./images/"  # local directory for downloaded images
def get_data(page=1):
    page_size = 20
    offset = (page - 1) * page_size
    sql = """
    SELECT id, relation_id, photo FROM  images  LIMIT {0},{1}
    """
    mycursor.execute(sql.format(offset, page_size))
    myresult = mycursor.fetchall()
    return myresult

def train_image_feature(myresult):
    indexName = "images"
    photo_path = "http://domain/{0}"  # placeholder: replace "domain" with the image host
    actions = []
    for x in myresult:
        id = str(x[0])
        relation_id = x[1]
        # photo = x[2].decode(encoding="utf-8")
        photo = x[2]
        full_photo = photo_path.format(photo)
        filename = image_dir + getfilename(full_photo)
        if not os.path.exists(filename):
            try:
                urllib_download(full_photo, filename)
            except BaseException as e:
                print("Failed to download image {1} for id {0}".format(id, full_photo))
                continue
        if not os.path.exists(filename):
            continue
        try:
            feature = model.extract_feat(filename).tolist()
            action = {
                "_op_type": "index",
                "_index": indexName,
                "_type": "_doc",
                "_id": id,
                "_source": {
                    "relation_id": relation_id,
                    "feature": feature,
                    "image_path": photo
                }
            }
            actions.append(action)
        except BaseException as e:
            print("Failed to extract features for image {1} with id {0}".format(id, full_photo))
            continue
    # print(actions)
    succeed_num = 0
    for ok, response in helpers.streaming_bulk(es, actions):
        if not ok:
            print(ok)
            print(response)
        else:
            succeed_num += 1
    print("Indexed {0} documents in this batch".format(succeed_num))
    es.indices.refresh(indexName)

page = 1
while True:
    print("Processing page {0}".format(page))
    myresult = get_data(page=page)
    if not myresult:
        print("No more data to fetch, exiting")
        break
    train_image_feature(myresult)
    page += 1

4. Searching for an image

import requests
import json
import os
import time
from flask import request  # assumed: this snippet runs inside a Flask view that handles the upload
from elasticsearch5 import Elasticsearch
from extract_cnn_vgg16_keras import VGGNet
model = VGGNet()
http_auth = ("elastic", "123455")
es = Elasticsearch("http://127.0.0.1:9200", http_auth=http_auth)
# Save the uploaded image
upload_image_path = "./runtime/"
upload_image = request.files.get("image")
upload_image_type = upload_image.content_type.split('/')[-1]
file_name = str(time.time())[:10] + '.' + upload_image_type
file_path = upload_image_path + file_name
upload_image.save(file_path)
# Compute the image feature vector
queryVec = model.extract_feat(file_path)
feature = queryVec.tolist()
# Delete the temporary image file
os.remove(file_path)
# Search Elasticsearch with the feature vector
body = {
    "query": {
        "hnsw": {
            "feature": {
                "vector": feature,
                "size": 5,
                "ef": 10
            }
        }
    },
    # "collapse": {
    # "field": "relation_id"
    # },
    "_source": {"includes": ["relation_id", "image_path"]},
    "from": 0,
    "size": 40
}
indexName = "images"
res = es.search(indexName, body=body)
# It is best to filter out low-scoring hits according to your own needs; in testing, hits scoring 0.65 or above matched reasonably well
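
As a concrete illustration of that advice, the short sketch below (not part of the original script) keeps only hits at or above the 0.65 threshold mentioned above, using the standard hit structure returned by es.search:

MIN_SCORE = 0.65  # empirically chosen threshold from the note above
results = [
    {
        "relation_id": hit["_source"]["relation_id"],
        "image_path": hit["_source"]["image_path"],
        "score": hit["_score"],
    }
    for hit in res["hits"]["hits"]
    if hit["_score"] >= MIN_SCORE
]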

5. Dependencies

mysql_connector_repackaged
elasticsearch
Pillow
tensorflow
requests
pandas
Keras
numpy

Community Daily, Issue 882 (2020-03-15)

1. Python Flask Celery + ELK.
http://t.cn/A6zakWIP
2. An ELK Stack tutorial: discover, analyze, and visualize your data effectively.
http://t.cn/RTqOKYy
3. (VPN required) 32+ funny code comments that people actually wrote.
http://t.cn/A6zSvmhc

Editor: 至尊宝
Archive: https://ela.st/cn-daily-all
Subscribe: https://ela.st/cn-daily-sub
Meetups: https://ela.st/cn-meetup

Community Daily, Issue 881 (2020-03-14)

1. Examples of working with Elasticsearch from PHP

http://t.cn/A6z66a9g

2. Lucene 8.2.0 internals: an analysis of the ByteBlockPool structure

http://t.cn/A6z66a9d

3. The difference between search_as_you_type and the Context Suggester in Elasticsearch

http://t.cn/A6z66a9r
