javascript - Parse a large JSON file in Node.js and process each object independently

Tags: javascript json node.js parsing

I need to read a large JSON file (around 630 MB) in Node.js and insert each object into MongoDB.

I have already read the answers here: Parse large JSON file in Nodejs.

However, the answers there process the JSON file line by line rather than object by object, so I still don't know how to get one object at a time out of this file and work with it.

My JSON file contains roughly 100,000 objects of this kind.

Data format:

[
  {
    "id": "0000000",
    "name": "Donna Blak",
    "livingSuburb": "Tingalpa",
    "age": 53,
    "nearestHospital": "Royal Children's Hospital",
    "treatments": {
        "19890803": {
            "medicine": "Stomach flu B",
            "disease": "Stomach flu"
        },
        "19740112": {
            "medicine": "Progeria C",
            "disease": "Progeria"
        },
        "19830206": {
            "medicine": "Poliomyelitis B",
            "disease": "Poliomyelitis"
        }
    },
    "class": "patient"
  },
 ...
]

Cheers,

Alex

Best Answer

There is a nice module called 'stream-json' that does exactly what you want.

It can parse JSON files far exceeding available memory.

StreamArray handles a frequent use case: a huge array of relatively small objects similar to Django-produced database dumps. It streams array components individually taking care of assembling them automatically.

Here is a very basic example:

const StreamArray = require('stream-json/streamers/StreamArray');
const path = require('path');
const fs = require('fs');

const jsonStream = StreamArray.withParser();

// You'll get parsed JSON objects here
// The key is the array index
jsonStream.on('data', ({key, value}) => {
    console.log(key, value);
});

jsonStream.on('end', () => {
    console.log('All done');
});

const filename = path.join(__dirname, 'sample.json');
fs.createReadStream(filename).pipe(jsonStream.input);

If you want to do something more complex, for example process the objects sequentially (preserving their order) and apply an asynchronous operation to each one, you can implement a custom Writable stream like this:

const StreamArray = require('stream-json/streamers/StreamArray');
const {Writable} = require('stream');
const path = require('path');
const fs = require('fs');

const fileStream = fs.createReadStream(path.join(__dirname, 'sample.json'));
const jsonStream = StreamArray.withParser();

const processingStream = new Writable({
    write({key, value}, encoding, callback) {
        // Save to Mongo or perform any other async action here

        setTimeout(() => {
            console.log(value);
            // The next record will be read only after the current one is fully processed
            callback();
        }, 1000);
    },
    // Don't skip this, as we need to operate on objects, not buffers
    objectMode: true
});

//Pipe the streams as follows
fileStream.pipe(jsonStream.input);
jsonStream.pipe(processingStream);

//So we're waiting for the 'finish' event when everything is done.
processingStream.on('finish', () => console.log('All done'));

Please note: the examples above were tested against 'stream-json@1.1.3'. For some earlier versions (presumably before 1.0.0) you may have to use:

const StreamArray = require('stream-json/utils/StreamArray');

and then

const jsonStream = StreamArray.make();

This article on javascript - parsing a large JSON file in Node.js and processing each object independently is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/42896447/
