multiprocessing.Process: is the memory consumed by a spawned process released once the process is joined?
The scenario I have in mind is roughly this:
from multiprocessing import Process
from multiprocessing import Queue
import queue
import time
import os

def main():
    tasks = Queue()
    for task in [1, 18, 1, 2, 5, 2]:
        tasks.put(task)
    num_proc = 3  # this many workers at each point in time
    procs = []
    for j in range(num_proc):
        p = Process(target=run_q, args=(tasks,))
        procs.append(p)
        p.start()
    # join each worker once it's done
    while procs:
        for p in procs[:]:  # iterate over a copy so removal is safe
            if not p.is_alive():
                p.join()  # what happens to the memory allocated by run()?
                procs.remove(p)
                print(p, len(procs))
        time.sleep(1)

def run_q(task_q):
    while True:  # while there's stuff to do, keep working
        try:
            task = task_q.get_nowait()  # avoids the empty()/get() race
        except queue.Empty:
            break
        run(task)

def run(x):  # do real work, allocates memory
    print(x, os.getpid())
    time.sleep(3 * x)

if __name__ == "__main__":
    main()
In the real code, the number of tasks is far greater than the number of CPU cores, each task is lightweight, and different tasks take wildly different amounts of CPU time (minutes to days) and wildly different amounts of memory (from peanuts to several GB). All of that memory is local to run and there is no need to share it --- so the question is whether it is released once run returns and/or once the process has been joined.
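A minimal sketch (not from the original question) of how one can check that a joined child is actually gone, using only Process.join(), is_alive(), and exitcode:

```python
from multiprocessing import Process

def worker():
    data = [0] * 1_000_000  # memory local to the child; never shared
    del data  # the child's whole address space goes away at exit anyway

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()  # blocks until the child terminates, then reaps it
    assert not p.is_alive()
    print(p.exitcode)  # 0 on a clean exit
```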
Best answer

The memory consumed by a process is released when the process terminates. In your example, that happens when run_q() returns and the worker process exits.
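If per-task memory growth is the concern, a common alternative (not part of the original answer, but a standard library feature) is multiprocessing.Pool with maxtasksperchild, which retires each worker process after a fixed number of tasks so the OS reclaims its memory before the next task starts:

```python
from multiprocessing import Pool
import os

def run(x):
    data = [0] * (x * 100_000)  # stands in for the real per-task allocation
    return (x, os.getpid())  # this memory is reclaimed when the worker exits

if __name__ == "__main__":
    # maxtasksperchild=1: every worker exits after a single task, so even a
    # task that allocated gigabytes returns its memory to the OS afterwards
    with Pool(processes=3, maxtasksperchild=1) as pool:
        results = pool.map(run, [1, 18, 1, 2, 5, 2])
    print([r[0] for r in results])  # [1, 18, 1, 2, 5, 2]
```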
On "python - memory usage of python multiprocessing", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/12924792/