I am trying to process several files at once, where each file generates blocks of data to feed simultaneously into a queue with a certain size limit. For example, if there are 5 files, each containing 1 million elements, I want to feed 100 elements from each file to another generator that yields 500 elements at a time.
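Ignoring multiprocessing for a moment, the interleaved batching I'm after can be sketched with a plain round-robin batcher (the names and toy data here are illustrative, not part of my actual code):

```python
from itertools import islice

def batches(iterables, per_source):
    """Yield batches that take `per_source` items from each iterable in turn."""
    its = [iter(x) for x in iterables]
    while True:
        batch = [o for it in its for o in islice(it, per_source)]
        if not batch:
            return
        yield batch

# Three toy "files" of 6 elements each; take 2 from each per batch.
sources = [[name + str(i) for i in range(6)] for name in ("a", "b", "c")]
for b in batches(sources, 2):
    print(b)  # first batch: ['a0', 'a1', 'b0', 'b1', 'c0', 'c1']
```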
Here is what I have been trying so far, but I am running into a "can't pickle generator" error:
import os
from itertools import islice
import multiprocessing as mp
import numpy as np

class File(object):
    def __init__(self, data_params):
        data_len = 100000
        self.large_data = np.array([data_params + str(i) for i in np.arange(0, data_len)])

    def __iter__(self):
        for i in self.large_data:
            yield i
def parse_file(file_path):
    # different file paths yield different data, obviously;
    # here we just emulate that with something silly
    if file_path == 'elephant_file':
        p = File(data_params='elephant')
    if file_path == 'number_file':
        p = File(data_params='number')
    if file_path == 'horse_file':
        p = File(data_params='horse')
    yield from p
def parse_dir(user_given_dir, chunksize=10):
    pool = mp.Pool(4)
    paths = ['elephant_file', 'number_file', 'horse_file']  # [os.path.join(user_given_dir, p) for p in os.listdir(user_given_dir)]

    # Works, but not simultaneously on all paths
    # for path in paths:
    #     data_gen = parse_file(path)
    #     parsed_data_batch = True
    #     while parsed_data_batch:
    #         parsed_data_batch = list(islice(data_gen, chunksize))
    #         yield parsed_data_batch

    # Doesn't work
    for objs in pool.imap(parse_file, paths, chunksize=chunksize):
        for o in objs:
            yield o

it = parse_dir('.')
for ix, o in enumerate(it):
    print(o)  # hopefully just prints 10 elephants, horses and numbers
    if ix > 2:
        break
Does anyone know how to achieve the desired behavior?
Best answer
For the pickle error: parse_file is a generator, not a regular function, because it uses yield internally, and multiprocessing needs a function as the task to execute. So you should replace yield from p with return p in parse_file().

If you want to fetch records from all the files batch by batch, try using zip in parse_dir():

iterators = [iter(e) for e in pool.imap(parse_file, paths, chunksize=chunksize)]
while True:
    batch = [o for i in iterators
               for _, o in zip(range(100), i)]  # e.g., 100 per file
    if batch:
        yield batch
    else:
        return
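To see why the fix works, here is a minimal, self-contained demonstration of the key point (data size shrunk for brevity; splitting the file path to get the prefix is my own shorthand, not from the question): a File instance survives a pickle round trip, while a generator object does not.

```python
import pickle
import numpy as np

class File(object):
    def __init__(self, data_params):
        data_len = 5  # the question uses 100000; kept small here
        self.large_data = np.array([data_params + str(i) for i in np.arange(0, data_len)])

    def __iter__(self):
        for i in self.large_data:
            yield i

def parse_file(file_path):
    # Regular function now: it *returns* a picklable File object
    # instead of yielding from it.
    return File(data_params=file_path.split('_')[0])

# Pickling a generator raises TypeError...
gen = (x for x in range(3))
try:
    pickle.dumps(gen)
except TypeError as e:
    print(e)

# ...but the File object round-trips fine and is still iterable.
obj = parse_file('elephant_file')
restored = pickle.loads(pickle.dumps(obj))
print([str(x) for x in restored][:2])  # ['elephant0', 'elephant1']
```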
Regarding "python - multiprocessing over multiple files with generators, and resolving TypeError ("can't pickle generator objects")", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55095679/