Python multiprocessing: synchronizing a file-like object

Tags: python multithreading multiprocessing python-2.6 python-multithreading

I'm trying to make a file-like object that will be assigned to sys.stdout/sys.stderr during testing in order to provide deterministic output. It's not meant to be fast, only reliable. What I have so far almost works, but I need some help getting rid of the last few corner-case bugs.

Here is my current implementation:

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

from os import getpid
class MultiProcessFile(object):
    """
    helper for testing multiprocessing

    multiprocessing poses a problem for doctests, since the strategy
    of replacing sys.stdout/stderr with file-like objects then
    inspecting the results won't work: the child processes will
    write to the objects, but the data will not be reflected
    in the parent doctest-ing process.

    The solution is to create file-like objects which will interact with
    multiprocessing in a more desirable way.

    All processes can write to this object, but only the creator can read.
    This allows the testing system to see a unified picture of I/O.
    """
    def __init__(self):
        # per advice at:
        #    http://docs.python.org/library/multiprocessing.html#all-platforms
        from multiprocessing import Queue
        self.__master = getpid()
        self.__queue = Queue()
        self.__buffer = StringIO()
        self.softspace = 0

    def buffer(self):
        if getpid() != self.__master:
            return

        from Queue import Empty
        from collections import defaultdict
        cache = defaultdict(str)
        while True:
            try:
                pid, data = self.__queue.get_nowait()
            except Empty:
                break
            cache[pid] += data
        for pid in sorted(cache):
            self.__buffer.write( '%s wrote: %r\n' % (pid, cache[pid]) )
    def write(self, data):
        self.__queue.put((getpid(), data))
    def __iter__(self):
        "getattr doesn't work for iter()"
        self.buffer()
        return self.__buffer
    def getvalue(self):
        self.buffer()
        return self.__buffer.getvalue()
    def flush(self):
        "meaningless"
        pass

...and a quick test script:

#!/usr/bin/python2.6

from multiprocessing import Process
from mpfile import MultiProcessFile

def printer(msg):
    print msg

processes = []
for i in range(20):
    processes.append( Process(target=printer, args=(i,), name='printer') )

print 'START'
import sys
buffer = MultiProcessFile()
sys.stdout = buffer

for p in processes:
    p.start()
for p in processes:
    p.join()

for i in range(20):
    print i,
print

sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
print 
print 'DONE'
print
buffer.buffer()
print buffer.getvalue()

This works perfectly 95% of the time, but it has three edge-case problems. I have to run the test script in a fast while-loop to reproduce them:

  1. 3% of the time, the parent process's output is not fully reflected. I assume this is because the data is being consumed before the queue's flush thread can catch up. I haven't yet figured out a way to wait for the thread without deadlocking.
  2. 0.5% of the time, there's a traceback from the multiprocess.Queue implementation.
  3. 0.01% of the time, the PIDs wrap around, so sorting by PID gives the wrong order.

In the worst case (odds: about one in 70 million), the output looks like this:

START

DONE

302 wrote: '19\n'
32731 wrote: '0 1 2 3 4 5 6 7 8 '
32732 wrote: '0\n'
32734 wrote: '1\n'
32735 wrote: '2\n'
32736 wrote: '3\n'
32737 wrote: '4\n'
32738 wrote: '5\n'
32743 wrote: '6\n'
32744 wrote: '7\n'
32745 wrote: '8\n'
32749 wrote: '9\n'
32751 wrote: '10\n'
32752 wrote: '11\n'
32753 wrote: '12\n'
32754 wrote: '13\n'
32756 wrote: '14\n'
32757 wrote: '15\n'
32759 wrote: '16\n'
32760 wrote: '17\n'
32761 wrote: '18\n'

Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
  File "/usr/lib/python2.6/threading.py", line 484, in run
      File "/usr/lib/python2.6/multiprocessing/queues.py", line 233, in _feed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable

Under python2.7 the exception is slightly different:

Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
  File "/usr/lib/python2.7/threading.py", line 505, in run
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
<type 'exceptions.IOError'>: [Errno 32] Broken pipe

How can I get rid of these edge cases?

Best Answer

The solution comes in two parts. I've successfully run the test program 200 thousand times without any change in the output.

The easy part was to use multiprocessing.current_process()._identity to sort the messages. This is not part of the published API, but it is a unique, deterministic identifier for each process. This fixed the problems of PIDs wrapping around and giving a bad ordering of output.

The other part of the solution was to use multiprocessing.Manager().Queue() rather than multiprocessing.Queue. This fixes problem #2 above, because the manager lives in a separate process and so avoids some of the nasty special cases of using a queue from the owning process. Problem #1 is fixed because the Queue is fully exhausted and the feeder thread dies naturally before python starts shutting down and closes stdin.

Original question (Python multiprocessing: synchronizing file-like object) on Stack Overflow: https://stackoverflow.com/questions/5821880/
