Python multiprocess.Pool.map cannot handle large arrays.

Tags: python pandas multiprocess

Here is the code I use to apply a function to the rows of a pandas.DataFrame object in parallel:

from multiprocessing import cpu_count, Pool
from functools import partial

import numpy as np
import pandas as pd
from pandas import DataFrame

def parallel_applymap_df(df: DataFrame, func, num_cores=cpu_count(), **kargs):

    # Split the dataframe into num_cores chunks of (nearly) equal size
    partitions = np.linspace(0, len(df), num_cores + 1, dtype=np.int64)
    df_split = [df.iloc[partitions[i]:partitions[i + 1]] for i in range(num_cores)]

    # Apply the wrapped function to every chunk in a pool of workers
    pool = Pool(num_cores)
    series = pd.concat(pool.map(partial(apply_wrapper, func=func, **kargs), df_split))
    pool.close()
    pool.join()

    return series

It works on a subsample of 200,000 rows, but when I try the full 200,000,000 rows, I get the following error message:
~/anaconda3/lib/python3.6/site-packages/multiprocess/connection.py in _send_bytes(self, buf)
394         n = len(buf)
395         # For wire compatibility with 3.2 and lower
--> 396         header = struct.pack("!i", n)
397         if n > 16384:
398             # The payload is large so Nagle's algorithm won't be triggered

error: 'i' format requires -2147483648 <= number <= 2147483647

which is raised by the line:
series = pd.concat(pool.map(partial(apply_wrapper, func=func, **kargs), df_split))

This is strange, because a slightly different version that I use to parallelize operations that are not vectorized in Pandas (like Series.dt.time) works on the same number of rows. This is the version that works:
def parallel_map_df(df: DataFrame, func, num_cores=cpu_count()):

    # Same chunking as above, but func is mapped over the chunks directly
    partitions = np.linspace(0, len(df), num_cores + 1, dtype=np.int64)
    df_split = [df.iloc[partitions[i]:partitions[i + 1]] for i in range(num_cores)]
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()

    return df
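
For example, to extract the time component of a datetime column in parallel (the column name "ts" is just for illustration):

def extract_time(chunk: DataFrame) -> DataFrame:
    # Series.dt.time has no fast vectorized path, so splitting the work
    # across worker processes helps
    return chunk.assign(t=chunk["ts"].dt.time)

result = parallel_map_df(df, extract_time)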

Best answer

The error itself comes from the fact that multiprocessing sets up connections between the different workers in the pool. To send data to or from such a worker, the data has to be serialized to bytes. The first step is to create a header for the message that will be sent to the worker, and this header contains the length of the buffer as a signed 32-bit integer (the "!i" format). If the buffer is longer than what can be represented by that integer, the code fails with the error you are seeing.
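You can see the limit directly: struct.pack("!i", n) encodes the buffer length as a signed 32-bit integer, so any message larger than 2**31 - 1 bytes (about 2 GiB) cannot be encoded:

import struct

struct.pack("!i", 2**31 - 1)  # the largest length the header can represent

try:
    struct.pack("!i", 2**31)  # one byte more than fits in a signed 32-bit int
except struct.error as exc:
    print(exc)  # 'i' format requires -2147483648 <= number <= 2147483647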
The data and quite a bit of the code needed to reproduce your problem are missing, so I will provide a minimal working example instead:

import numpy
import pandas
import random

from typing import List
from multiprocessing import cpu_count, Pool


def parallel_applymap_df(
    input_dataframe: pandas.DataFrame, func, num_cores: int = cpu_count(), **kwargs
) -> pandas.DataFrame:

    # Create splits in the dataframe of equal size (one split will be processed by one core)
    partitions = numpy.linspace(
        0, len(input_dataframe), num_cores + 1, dtype=numpy.int64
    )
    splits = [
        input_dataframe.iloc[partitions[i] : partitions[i + 1]]
        for i in range(num_cores)
    ]

    # Just for debugging, add metadata to each split
    for index, split in enumerate(splits):
        split.attrs["split_index"] = index

    # Create a pool of workers
    with Pool(num_cores) as pool:

        # Map the splits in the dataframe to workers in the pool
        result: List[pandas.DataFrame] = pool.map(func, splits, **kwargs)

    # Combine all results of the workers into a new dataframe
    return pandas.concat(result)


if __name__ == "__main__":

    # Create some test data
    df = pandas.DataFrame([{"A": random.randint(0, 100)} for _ in range(200000000)])

    def worker(df: pandas.DataFrame) -> pandas.DataFrame:

        # Print the length of the dataframe being processed (for debugging)
        print("Working on split #", df.attrs["split_index"], "Length:", len(df))

        # Do some arbitrary stuff to the split of the dataframe
        df["B"] = df.apply(lambda row: f"test_{row['A']}", axis=1)

        # Return the result
        return df

    # Create a new dataframe by applying the worker function to the dataframe in parallel
    df = parallel_applymap_df(df, worker)
    print(df)
Note that this is probably not the fastest way to do this. For faster alternatives, take a look at swifter or dask.
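
As a rough sketch of the dask variant (assuming dask is installed; the worker logic mirrors the example above, minus the debug print):

import dask.dataframe as dd
from multiprocessing import cpu_count

def add_b(part: pandas.DataFrame) -> pandas.DataFrame:
    # The same arbitrary per-row work as `worker` above
    part["B"] = part.apply(lambda row: f"test_{row['A']}", axis=1)
    return part

ddf = dd.from_pandas(df, npartitions=cpu_count())

# `meta` declares the output schema so dask does not have to infer it
result = ddf.map_partitions(add_b, meta={"A": "int64", "B": "object"}).compute()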

Regarding "Python multiprocess.Pool.map cannot handle large arrays", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50002665/
