I'm trying to output some data in Elasticsearch bulk import format. This requires two JSON lines (JSONL) per document, like this:
{"index": {"_type": "media", "_id": "https://macaulaylibrary.org/asset/75247", "_index": "audiomnia_dev"}}
{"description": "Macaulay Library ML75247; aracari sp.; Pteroglossus sp.; \u00a9\u00a0Curtis Marantz; Lago Sachavacaya Trail, right bank Rio Tambopata, Madre de Dios, Peru; 23 Aug 1994", "creator": "Curtis Marantz", "url": "https://macaulaylibrary.org/asset/75247", "image": "https://macaulaylibrary.org/media/Spectrograms/audio/image/320/0/75/75247.jpg", "commonName": "aracari sp.", "fileFormat": "audio", "sciName": "Pteroglossus sp.", "dateCreated": "1994-08-23T08:13:00", "geo": {"lat": "-12.9", "lon": "-69.3667"}, "contentLocation": "Lago Sachavacaya Trail, right bank Rio Tambopata, Madre de Dios, Peru", "name": "ML75247 aracari sp. Macaulay Library"}
Is there a way to do this reliably in Scrapy? I have the following, but there's a race condition: in some cases it scrambles the order of the lines, which makes the Elasticsearch bulk API choke:
yield {
    "index": {
        "_index": "audiomnia_dev",
        "_type": "media",
        "_id": json_ld["url"]
    }
}
yield json_ld
What is the correct way to keep the two JSONL lines together while still following the generator/yield pattern?
Best answer
Have the spider yield a single item containing all the relevant data, then write a custom item exporter that formats it correctly for Elasticsearch. Because the exporter derives the action line from the item itself, the two lines are always written together and in order, and the race condition disappears.
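A minimal sketch of the formatting step, assuming each item is the full JSON-LD dict shown above and that its `url` field is used as the document `_id` (the function name `to_bulk_lines` and the default index/type values are illustrative). This is the core logic you would call from a custom item exporter's `export_item` method:

```python
import json

def to_bulk_lines(json_ld, index="audiomnia_dev", doc_type="media"):
    """Return the two-line Elasticsearch bulk entry for one document."""
    action = {"index": {"_index": index,
                        "_type": doc_type,
                        "_id": json_ld["url"]}}
    # Building both lines as a single string guarantees they stay
    # adjacent and correctly ordered, no matter how items are scheduled.
    return json.dumps(action) + "\n" + json.dumps(json_ld) + "\n"
```

Since the action line and the document are emitted in one write per item, interleaving between items can no longer split a pair.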
A similar question on Stack Overflow: https://stackoverflow.com/questions/51011099/