I am trying to identify a program/software that will let me efficiently take a large number of large CSV files (totaling 40+ GB) and output a JSON file in the specific format required for importing into Elasticsearch (ES).
Can jq, for example, efficiently take data like this:
file1:
id,age,gender,wave
1,49,M,1
2,72,F,0
file2:
id,time,event1
1,4/20/2095,V39
1,4/21/2095,T21
2,5/17/2094,V39
and aggregate it by id (so that all the JSON documents built from CSV rows across multiple files fall under a single id entry), producing output like this:
{"index":{"_index":"forum_mat","_type":"subject","_id":"1"}}
{"id":"1","file1":[{"filen":"file1","id":"1","age":"49","gender":"M","wave":"1"}],"file2":[{"filen":"file2","id":"1","time":"4/20/2095","event1":"V39"},{"filen":"file2","id":"1","time":"4/21/2095","event1":"T21"}]}
{"index":{"_index":"forum_mat","_type":"subject","_id":"2"}}
{"id":"2","file1":[{"filen":"file1","id":"2","age":"72","gender":"F","wave":"0"}],"file2":[{"filen":"file2","id":"2","time":"5/17/2094","event1":"V39"}]}
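As an illustration of the transformation being asked for (this sketch is not part of the question; the helper name `to_bulk_lines` is my own), rows from several parsed CSV files can be grouped under one document per id and emitted with the Elasticsearch bulk-API action line shown above:

```python
import json
from collections import defaultdict

def to_bulk_lines(files):
    """files: dict mapping file name -> list of row dicts, each with an 'id' key.

    Returns alternating bulk-action and document lines, one document per id,
    with each file's rows collected into a list under that file's name.
    """
    docs = defaultdict(dict)
    for filen, rows in files.items():
        for row in rows:
            doc = docs[str(row["id"])]
            # Tag each row with its source file, as in the desired output.
            doc.setdefault(filen, []).append(dict(filen=filen, **row))
    lines = []
    for _id in sorted(docs):
        lines.append(json.dumps({"index": {"_index": "forum_mat",
                                           "_type": "subject", "_id": _id}}))
        lines.append(json.dumps({"id": _id, **docs[_id]}))
    return lines
```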
I wrote a script in Matlab, but as I feared it is far too slow: it could take months to process all 40+ GB of data. I have been informed that Logstash (ES's preferred data-input tool) is not good at this kind of aggregation.
Best answer
I believe the following does what you want, but I don't fully understand the connection between your input files and the output you included. Hopefully this will at least set you on the right track.
The program assumes that all the data will fit into memory. It uses JSON objects as dictionaries for fast lookup, so it should perform quite well.
The approach taken here separates the CSV-to-JSON conversion from the aggregation, since there may be better ways to do the former. (See, for example, the jq Cookbook entry on convert-a-csv-file-with-headers-to-json.)
The first file (scsv2json.jq) converts simple CSV to JSON; the second file (aggregate.jq) performs the aggregation. With these in place:
$ (jq -R -s -f scsv2json.jq file1.csv ;\
jq -R -s -f scsv2json.jq file2.csv) |\
jq -s -c -f aggregate.jq
[{"id":"1",
"file1":{"age":"49","gender":"M","wave":"1"},
"file2":{"time":"4/21/2095","event1":"T21"}},
{"id":"2",
"file1":{"age":"72","gender":"F","wave":"0"},
"file2":{"time":"5/17/2094","event1":"V39"}}]
Note that "id" has been removed from the inner objects in the output.
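As an illustration only (not part of the original answer), the same dictionary-based aggregation can be sketched in Python, assuming each CSV has already been parsed into a list of row dicts; the helper names `to_dictionary` and `aggregate` mirror the jq definitions:

```python
def to_dictionary(rows):
    """Index a list of row dicts by their 'id' field (as a string).

    Like the jq todictionary, a later row with the same id overwrites
    an earlier one.
    """
    return {str(row["id"]): row for row in rows}

def aggregate(file1_rows, file2_rows):
    """Merge two row lists into one entry per id, dropping the inner 'id'."""
    d1 = to_dictionary(file1_rows)
    d2 = to_dictionary(file2_rows)

    def strip_id(row):
        return {k: v for k, v in row.items() if k != "id"} if row else None

    keys = sorted(set(d1) | set(d2))  # union of ids across both files
    return [{"id": k,
             "file1": strip_id(d1.get(k)),
             "file2": strip_id(d2.get(k))}
            for k in keys]
```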
aggregate.jq:
# Input: an array of objects, each with an "id" field
# such that (.id|tostring) can serve as a key.
# Output: a dictionary keyed by the id field.
def todictionary:
reduce .[] as $row ( {}; . + { ($row.id | tostring): $row } );
def aggregate:
.[0] as $file1
| .[1] as $file2
| ($file1 | todictionary) as $d1
| ($file2 | todictionary) as $d2
| ( [$file1[].id] + [$file2[].id] | unique ) as $keys
| reduce ($keys[] | tostring) as $k
( [];
. + [{"id": $k,
"file1": ($d1[$k] | del(.id)),
"file2": ($d2[$k] | del(.id)) }] );
aggregate
scsv2json.jq:
def objectify(headers):
. as $in
| reduce range(0; headers|length) as $i
({}; .[headers[$i]] = ($in[$i]) );
def csv2table:
def trim: sub("^ +";"") | sub(" +$";"");
split("\n") | map( split(",") | map(trim) );
def csv2json:
csv2table
| .[0] as $headers
| reduce (.[1:][] | select(length > 0) ) as $row
( []; . + [ $row|objectify($headers) ]);
csv2json
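For comparison (again an illustration, not from the answer), a rough Python analogue of scsv2json.jq: the first CSV line supplies the headers, each later non-empty line becomes one object, and cells are trimmed. Like the jq version, this splits naively on commas, so quoted fields containing commas are not handled:

```python
def csv2json(text):
    """Convert simple headered CSV text to a list of dicts."""
    rows = [[cell.strip() for cell in line.split(",")]
            for line in text.split("\n")]
    headers = rows[0]
    # any(row) drops blank lines, which split into a single empty cell.
    return [dict(zip(headers, row)) for row in rows[1:] if any(row)]
```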
The above assumes a version of jq with regex support. If your jq does not have regex support, simply omit the trimming.
On "json - can jq perform aggregation across files", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/33535186/