Python multiprocessing takes longer

Tags: python, multithreading, multiprocessing

I am trying to use Python's multiprocessing module to reduce the time my filtering code takes. To start, I ran some experiments, and the results were not promising.

I defined a function that runs a loop over a given range, then timed that function both with and without threading. Here is my code:

import time
from multiprocessing.pool import ThreadPool

def do_loop(i, j):
    # build the list [i, i+1, ..., j-1]
    l = []
    for k in range(i, j):
        l.append(k)
    return l

# loop size exponent (10**x iterations)
x = 7

# without threading
start_time = time.time()
c = do_loop(0,10**x)
print("--- %s seconds ---" % (time.time() - start_time))

# with threading
def thread_work(n):
    # divide the range into three equal chunks
    a = 0
    b = n // 3
    c = 2 * n // 3
    # a pool of worker threads (threads, not processes, despite the module path)
    pool = ThreadPool(processes=10)
    async_result1 = pool.apply_async(do_loop, (a, b))
    async_result2 = pool.apply_async(do_loop, (b, c))
    async_result3 = pool.apply_async(do_loop, (c, n))
    # collect the results from all workers
    result = async_result1.get() + async_result2.get() + async_result3.get()

    return result

start_time = time.time()
ll = thread_work(10**x)
print("--- %s seconds ---" % (time.time() - start_time))

For x = 7, the results are:

--- 1.0931916236877441 seconds ---
--- 1.4213247299194336 seconds ---

Without threading it takes less time. And there is another problem: for x = 8, the threaded version usually raises a MemoryError. Once it did complete, with this result:

--- 17.04124426841736 seconds ---
--- 32.871358156204224 seconds ---

A solution matters here because my real filtering task takes 6 hours, and I need to optimize it.

Best answer

Depending on your task, multiprocessing may or may not take longer. Note that ThreadPool uses threads, and in CPython all threads share the GIL, so a CPU-bound loop gains nothing from running in several threads. If you want to take advantage of your CPU cores and speed up the filtering, you should use multiprocessing.Pool, which (per the docs):

offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism).
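As a minimal sketch of that approach applied to the question's loop-style workload (the chunk boundaries and worker count here are illustrative, not tuned):

```python
from multiprocessing import Pool

def do_loop(bounds):
    # CPU-bound work: build a list over a half-open range [i, j)
    i, j = bounds
    return list(range(i, j))

def parallel_loop(n, workers=4):
    # split [0, n) into `workers` contiguous chunks
    step = n // workers
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    # processes (not threads) sidestep the GIL for CPU-bound work
    with Pool(processes=workers) as pool:
        parts = pool.map(do_loop, chunks)
    # flatten the per-chunk results back into one list
    return [x for part in parts for x in part]

if __name__ == '__main__':
    assert parallel_loop(100) == list(range(100))
```

For a tiny n like this the process startup cost dominates; the payoff only appears once each chunk carries real CPU work, which is the point the answer below measures.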

I put together a data-filtering example and measured the time of the simple approach against the multiprocessing approach (starting from your code):

# keep only the sentences that end in "we are what we dream" and whose second word is "are"


import time
from multiprocessing.pool import Pool

LEN_FILTER_SENTENCE = len('we are what we dream')
num_process = 10

def do_loop(sentences):
    l = []
    for sentence in sentences:
        # match the filter phrase at the end, and check the second word
        if sentence[-LEN_FILTER_SENTENCE:].lower() == 'we are what we dream' and sentence.split()[1] == 'are':
            l.append(sentence)
    return l

# with multiprocessing
def thread_work(sentences):
    pool = Pool(processes=num_process)
    # slice the input into chunks of num_process sentences each
    pool_food = (sentences[i: i + num_process] for i in range(0, len(sentences), num_process))
    result = pool.map(do_loop, pool_food)
    return result

def test(data_size=5, sentence_size=100):
    to_be_filtered = ['we are what we doing'*sentence_size] * 10 ** data_size + ['we are what we dream'*sentence_size] * 10 ** data_size

    start_time = time.time()
    c = do_loop(to_be_filtered)
    simple_time = (time.time() - start_time)



    start_time = time.time()
    ll = [e for l in thread_work(to_be_filtered) for e in l]
    multiprocessing_time = (time.time() - start_time)
    assert c == ll 
    return simple_time, multiprocessing_time
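The pool_food generator above slices the sentence list into fixed-size chunks of num_process items each. A standalone sketch of that slicing (list contents and sizes are illustrative):

```python
def chunk(items, size):
    # consecutive slices of at most `size` elements; the last one may be shorter
    return [items[i:i + size] for i in range(0, len(items), size)]

sentences = ['s%d' % k for k in range(23)]
pieces = chunk(sentences, 10)
assert len(pieces) == 3
assert pieces[-1] == ['s20', 's21', 's22']
```

Note that with the chunk size fixed at num_process, a large input produces many tiny tasks; a larger chunk size, e.g. len(sentences) // num_process, would usually reduce inter-process overhead.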

data_size determines the length of the data, and sentence_size is a multiplier for each data element, so sentence_size is proportional to the amount of CPU work each item requires.

data_size = [1, 2, 3, 4, 5, 6]
results = {i: {'simple_time': [], 'multiprocessing_time': []} for i in data_size}
sentence_size = list(range(1, 500, 100))
for size in data_size:
    for s_size in sentence_size:
        simple_time, multiprocessing_time = test(size, s_size)
        results[size]['simple_time'].append(simple_time)
        results[size]['multiprocessing_time'].append(multiprocessing_time)

import pandas as pd

df_small_data = pd.DataFrame({'simple_data_size_1': results[1]['simple_time'],
                   'simple_data_size_2': results[2]['simple_time'],
                   'simple_data_size_3': results[3]['simple_time'],
                   'multiprocessing_data_size_1': results[1]['multiprocessing_time'],
                   'multiprocessing_data_size_2': results[2]['multiprocessing_time'],
                   'multiprocessing_data_size_3': results[3]['multiprocessing_time'],

                   'sentence_size': sentence_size})

df_big_data = pd.DataFrame({'simple_data_size_4': results[4]['simple_time'],
                   'simple_data_size_5': results[5]['simple_time'],
                   'simple_data_size_6': results[6]['simple_time'],
                   'multiprocessing_data_size_4': results[4]['multiprocessing_time'],
                   'multiprocessing_data_size_5': results[5]['multiprocessing_time'],
                   'multiprocessing_data_size_6': results[6]['multiprocessing_time'],

                   'sentence_size': sentence_size})

Plotting the timings for the small data:

ax = df_small_data.set_index('sentence_size').plot(figsize=(20, 10), title = 'Simple vs multiprocessing approach for small data')
ax.set_ylabel('Time in seconds')


Plotting the timings for the (relatively) big data gives the analogous chart for data sizes 4 to 6.

As you can see, multiprocessing pays off when you have a lot of data and each element needs relatively heavy CPU work.
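One more practical knob worth knowing: Pool.map accepts a chunksize argument, so instead of pre-slicing the input yourself you can map over individual sentences and let the pool batch them. A sketch under the same filter as above (function names and the chunksize heuristic are illustrative):

```python
from multiprocessing import Pool

def is_match(sentence):
    # same filter as above, applied to one sentence at a time
    words = sentence.split()
    return (sentence[-20:].lower() == 'we are what we dream'
            and len(words) > 1 and words[1] == 'are')

def filter_sentences(sentences, workers=4):
    with Pool(processes=workers) as pool:
        # chunksize batches items per task, cutting inter-process overhead
        flags = pool.map(is_match, sentences,
                         chunksize=max(1, len(sentences) // (workers * 4)))
    # keep the sentences whose flag came back True, preserving order
    return [s for s, keep in zip(sentences, flags) if keep]

if __name__ == '__main__':
    data = ['we are what we dream'] * 5 + ['we are what we doing'] * 5
    assert filter_sentences(data, workers=2) == ['we are what we dream'] * 5
```

Mapping a boolean predicate rather than the sentences themselves also keeps the data sent back from the workers small, which can matter when the sentences are long.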

Regarding "Python multiprocessing takes longer", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57544734/
