Python top-N word count: why is multiprocessing slower than a single process?

Tags: python, multiprocessing

I am counting word frequencies with a single-process version in Python:

#coding=utf-8
import string
import time
from collections import Counter
starttime = time.clock()
origin = open("document.txt", 'r').read().lower()
for_split = [',','\n','\t','\'','.','\"','!','?','-', '~']

#the words below will be ignored when counting
ignored = ['the', 'and', 'i', 'to', 'of', 'a', 'in', 'was', 'that', 'had',
       'he', 'you', 'his','my', 'it', 'as', 'with', 'her', 'for', 'on']
i=0
for ch in for_split:
    origin = string.replace(origin, ch, ' ')
words = string.split(origin)
result = Counter(words).most_common(40)
for word, frequency in result:
    if not word in ignored and i < 10:
        print "%s : %d" % (word, frequency)
        i = i+1
print time.clock() - starttime

And the multiprocessing version looks like this:

#coding=utf-8
import time
import multiprocessing
from collections import Counter
for_split = [',','\n','\t','\'','.','\"','!','?','-', '~']
ignored = ['the', 'and', 'i', 'to', 'of', 'a', 'in', 'was', 'that', 'had',
       'he', 'you', 'his','my', 'it', 'as', 'with', 'her', 'for', 'on']
result_list = []

def worker(substr):
    result = Counter(substr)
    return result

def log_result(result):
    result_list.append(result)

def main():
    pool = multiprocessing.Pool(processes=5)
    origin = open("document.txt", 'r').read().lower()
    for ch in for_split:
        origin = origin.replace(ch, ' ')
    words = origin.split()
    step = len(words)/4
    substrs = [words[pos : pos+step] for pos in range(0, len(words), step)]
    result = Counter()
    for substr in substrs:
        pool.apply_async(worker, args=(substr,), callback = log_result)
    pool.close()
    pool.join()
    result = Counter()
    for item in result_list:
        result = result + item
    result = result.most_common(40)
    i=0
    for word, frequency in result:
        if not word in ignored and i < 10:
            print "%s : %d" % (word, frequency)
            i = i+1

if __name__ == "__main__":
    starttime = time.clock()
    main()
    print time.clock() - starttime

"document.txt" is about 22 MB, and my laptop has a multi-core CPU and 2 GB of RAM. The first version takes 3.27 s and the second 8.15 s. I varied the number of processes (pool = multiprocessing.Pool(processes=5)) from 2 to 10, and the results are almost the same. Why is that, and how can I make this program run faster than the single-process version?

Best answer

I think this is the overhead involved in distributing the individual strings to the workers and collecting the results. If I run the parallel code given above with a sample document (Dostoyevsky's "Crime and Punishment"), it takes about 0.32 s to run, whereas the single-process version needs only 0.09 s. If I modify the worker function to process just the string "test" instead of the real document (while still passing the real string as an argument), the runtime drops to 0.22 s. However, if I pass "test" as the argument to the map_async function, the runtime decreases to 0.06 s. Hence, I would say that in your case the runtime of the program is limited by the inter-process communication overhead.
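A minimal sketch (not from the original answer) of where that overhead comes from: every argument passed through apply_async/map_async is pickled in the parent and unpickled in the worker, so sending a list of roughly a million tokens costs real time before any counting happens. The corpus here is synthetic.

```python
# Estimate the serialization cost hidden inside pool.apply_async(worker, args=(substr,)):
# multiprocessing pickles each argument in the parent and unpickles it in the child.
import pickle
import time
from collections import Counter

# Hypothetical corpus: ~1M tokens drawn from 1000 distinct words.
words = ["word%d" % (i % 1000) for i in range(1000000)]

t0 = time.time()
blob = pickle.dumps(words)            # what the parent process pays per task
t1 = time.time()
counts = Counter(pickle.loads(blob))  # what the worker pays before it can count
t2 = time.time()

print("pickle: %.3fs, unpickle+count: %.3fs" % (t1 - t0, t2 - t1))
```

On a 22 MB document this round trip can easily dominate the actual Counter work, which is why adding more processes barely changes the total time.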

With the following code, I brought the runtime of the parallel version down to 0.08 s: first, I split the file into a number of chunks of (almost) equal length, making sure that the boundaries between chunks coincide with newlines. Then I simply pass the length and offset of each chunk to every worker process, let it open the file, read the chunk, process it, and return the result. This seems to cause much less overhead than distributing the strings directly through the map_async function. For larger file sizes, you should be able to see an improvement in runtime with this code. Furthermore, if you can tolerate small counting errors, you can skip the step that determines correct chunk boundaries and just split the file into equally sized chunks. In my example, this brings the runtime down to 0.04 s, making the mp code faster than the single-process code.

#coding=utf-8
import time
import multiprocessing
import string
from collections import Counter
import os
for_split = [',','\n','\t','\'','.','\"','!','?','-', '~']
ignored = ['the', 'and', 'i', 'to', 'of', 'a', 'in', 'was', 'that', 'had',
       'he', 'you', 'his','my', 'it', 'as', 'with', 'her', 'for', 'on']
result_list = []

def worker(offset,length,filename):
    origin = open(filename, 'r')
    origin.seek(offset)
    content = origin.read(length).lower()

    for ch in for_split:
         content = content.replace(ch, ' ')

    words = string.split(content)
    result = Counter(words)
    origin.close()
    return result

def log_result(result):
    result_list.append(result)

def main():
    processes = 5
    pool = multiprocessing.Pool(processes=processes)
    filename = "document.txt"
    file_size = os.path.getsize(filename)
    chunks = []
    origin = open(filename, 'r')
    while True:
        lines = origin.readlines(file_size/processes)
        if not lines:
            break
        chunks.append("".join(lines))  # readlines() keeps trailing newlines; joining with "\n" would inflate the lengths and shift the offsets

    lengths = [len(chunk) for chunk in chunks]
    offset = 0

    for length in lengths:
        pool.apply_async(worker, args=(offset,length,filename,), callback = log_result)
        offset += length

    pool.close()
    pool.join()
    result = Counter()
    for item in result_list:
        result = result + item
    result = result.most_common(40)
    i=0
    for word, frequency in result:
        if not word in ignored and i < 10:
            print "%s : %d" % (word, frequency)
            i = i+1
if __name__ == "__main__":
    starttime = time.clock()
    main()
    print time.clock() - starttime
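The boundary-alignment step described above can also be done without readlines(): pick roughly equal byte offsets, then advance each one to the next newline so no word straddles two chunks. A sketch (not part of the accepted answer; the function name and chunk count are assumptions):

```python
# Compute (offset, length) pairs whose boundaries fall on newlines,
# so each worker can seek/read its chunk without splitting a word.
import os

def chunk_bounds(filename, n_chunks):
    size = os.path.getsize(filename)
    bounds = [0]
    with open(filename, "rb") as f:
        for i in range(1, n_chunks):
            f.seek(i * size // n_chunks)  # rough split point
            f.readline()                  # advance to the end of the current line
            bounds.append(f.tell())
    bounds.append(size)
    # Adjacent bounds become (offset, length) pairs covering the whole file.
    return [(bounds[i], bounds[i + 1] - bounds[i]) for i in range(n_chunks)]
```

Each pair can then be passed to the worker in place of the (offset, length) values computed from the chunk list, with the same apply_async call.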

Regarding "Python top-N word count: why is multiprocessing slower than a single process?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/18300785/
