python - UnicodeDecodeError when bulk importing data into elasticsearch with md5 ids

Tags: python elasticsearch unicode

I wrote a simple Python script that uses the bulk API to import data into elasticsearch.

# -*- encoding: utf-8 -*-
import csv
import datetime
import hashlib
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from dateutil.relativedelta import relativedelta


ORIGINAL_FORMAT = '%y-%m-%d %H:%M:%S'
INDEX_PREFIX = 'my-log'
INDEX_DATE_FORMAT = '%Y-%m-%d'
FILE_ADDR = '/media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/sample_data/sample.csv'


def set_data(input_file):
    with open(input_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            sendtime = datetime.datetime.strptime(row['sendTime'].split('.')[0], ORIGINAL_FORMAT)

            yield {
                "_index": '{0}-{1}_{2}'.format(
                                        INDEX_PREFIX,
                                        sendtime.replace(day=1).strftime(INDEX_DATE_FORMAT),
                                        (sendtime.replace(day=1) + relativedelta(months=1)).strftime(INDEX_DATE_FORMAT)),
                "_type": 'data',
                '_id': hashlib.md5("{0}{1}{2}{3}{4}".format(sendtime, row['IMSI'], row['MSISDN'], int(row['ruleRef']), int(row['sponsorRef']))).digest(),
                "_source": {
                    'body': {
                        'status': int(row['status']),
                        'sendTime': sendtime
                    }
                }
            }


if __name__ == "__main__":
    es = Elasticsearch(['http://{0}:{1}'.format('my.host.ip.addr', 9200)])
    es.indices.delete(index='*')
    success, _ = bulk(es, set_data(FILE_ADDR))

This comment helped me write/use the set_data method.

Unfortunately, I ran into this exception:

/usr/bin/python2.7 /media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/import_bulk_data.py
Traceback (most recent call last):
  File "/media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/import_bulk_data.py", line 59, in <module>
    success, _ = bulk(es, set_data(source_file))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 257, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 180, in streaming_bulk
    client.transport.serializer):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 60, in _chunk_actions
    action = serializer.dumps(action)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/serializer.py", line 50, in dumps
    raise SerializationError(data, e)
elasticsearch.exceptions.SerializationError: ({u'index': {u'_type': 'data', u'_id': '8\x1dI\xa2\xe9\xa2H-\xa6\x0f\xbd=\xa7CY\xa3', u'_index': 'my-log-2017-04-01_2017-05-01'}}, UnicodeDecodeError('utf8', '8\x1dI\xa2\xe9\xa2H-\xa6\x0f\xbd=\xa7CY\xa3', 3, 4, 'invalid start byte'))

Process finished with exit code 1

I can insert this data into elasticsearch successfully using the index API:

es.index(index='{0}-{1}_{2}'.format(
             INDEX_PREFIX,
             sendtime.replace(day=1).strftime(INDEX_DATE_FORMAT),
             (sendtime.replace(day=1) + relativedelta(months=1)).strftime(INDEX_DATE_FORMAT)),
         doc_type='data',
         id=hashlib.md5("{0}{1}{2}{3}{4}".format(sendtime, row['IMSI'], row['MSISDN'], int(row['ruleRef']), int(row['sponsorRef']))).digest(),
         body={
             'status': int(row['status']),
             'sendTime': sendtime
         })

But the problem with the index API is that it is very slow; importing just 50 records takes about 2 seconds. I hoped the bulk API would speed this up.

Best Answer

According to the hashlib documentation, the digest() method will:

Return the digest of the data passed to the update() method so far. This is a bytes object of size digest_size which may contain bytes in the whole range from 0 to 255.

Therefore, the resulting bytes may not be decodable as unicode.

>>> id_ = hashlib.md5('abc'.encode('utf-8')).digest()
>>> id_
b'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
>>> id_.decode('utf-8')
Traceback (most recent call last):
  File "<console>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 0: invalid start byte

The hexdigest() method produces a string as output; from the docs:

Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments.

>>> id_ = hashlib.md5('abc'.encode('utf-8')).hexdigest()
>>> id_
'900150983cd24fb0d6963f7d28e17f72'
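
Concretely, the fix in the question's set_data generator is to replace digest() with hexdigest() when building the '_id'. Below is a minimal, self-contained sketch with hypothetical stand-in values for one CSV row (on Python 3 the formatted string would additionally need .encode('utf-8') first, since hashlib there requires bytes):

# -*- coding: utf-8 -*-
import hashlib

# Hypothetical stand-ins for the values read from one CSV row.
sendtime = '2017-04-02 10:20:30'
row = {'IMSI': '42901001234', 'MSISDN': '989121234567',
       'ruleRef': '1', 'sponsorRef': '2'}

# hexdigest() returns an ASCII string of 32 hex digits, which the JSON
# serializer used by the bulk helper can encode, unlike the raw bytes
# returned by digest().
doc_id = hashlib.md5("{0}{1}{2}{3}{4}".format(
    sendtime, row['IMSI'], row['MSISDN'],
    int(row['ruleRef']), int(row['sponsorRef']))).hexdigest()

print(doc_id)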

Regarding "python - UnicodeDecodeError when bulk importing data into elasticsearch with md5 ids", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48127047/
