Python: Garbage collection fails?

Tags: python, garbage-collection

Consider the following script:

l = [i for i in range(int(1e8))]
l = []
import gc
gc.collect()
# 0
gc.get_referrers(l)
# [{'__builtins__': <module '__builtin__' (built-in)>, 'l': [], '__package__': None, 'i': 99999999, 'gc': <module 'gc' (built-in)>, '__name__': '__main__', '__doc__': None}]
del l
gc.collect()
# 0
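
For context, gc.collect() returns the number of unreachable objects found by the cycle detector. A plain list of ints contains no reference cycles, so CPython frees it immediately via reference counting when l is rebound, and a return value of 0 is expected. A minimal illustration of the difference (the Node class is made up for this demo):

import gc

class Node(object):
    pass

# Build a reference cycle, which reference counting alone cannot free.
a = Node()
b = Node()
a.ref = b
b.ref = a
del a, b
print(gc.collect())  # > 0: the cycle detector found unreachable objects

# A plain list is freed by reference counting the moment it is deleted.
l = [1, 2, 3]
del l
print(gc.collect())  # 0: nothing cyclic left for the collector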

The point is that after all these steps, the memory usage of this Python process is around 30% on my machine (Python 2.6.5; more details available on request?). Here's an excerpt of the output of top:

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND  
5478 moooeeeep 20   0 2397m 2.3g 3428 S    0 29.8   0:09.15 ipython  

and, respectively, the output of ps aux:

moooeeeep 5478  1.0 29.7 2454720 2413516 pts/2 S+   12:39   0:09 /usr/bin/python /usr/bin/ipython gctest.py
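
Rather than eyeballing top or ps, the resident set size can also be read from inside the process. Here is a minimal sketch, assuming a Linux system, where /proc/self/status reports VmRSS in kB:

def current_rss_mb():
    # Linux-only: parse the VmRSS line of /proc/self/status,
    # which holds the current resident set size in kB.
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1]) / 1024.0

print(current_rss_mb())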

According to the docs for gc.collect:

Not all items in some free lists may be freed due to the particular implementation, in particular int and float.

Does this mean that, if I (temporarily) need a large number of different int and float values, I have to offload this to C/C++ because the Python GC fails to release the memory?


Update

Probably the interpreter is the culprit, as this article suggests:

It’s that you’ve created 5 million integers simultaneously alive, and each int object consumes 12 bytes. “For speed”, Python maintains an internal free list for integer objects. Unfortunately, that free list is both immortal and unbounded in size. floats also use an immortal & unbounded free list.
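
The 12 bytes figure applies to 32-bit builds of Python 2; on a typical 64-bit build each int costs 24 bytes. The per-object cost on a given build can be checked with sys.getsizeof (available since Python 2.6):

import sys

# 12 on a 32-bit Python 2 build, 24 on a typical 64-bit build
print(sys.getsizeof(1))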

But the question remains, as I cannot avoid this amount of data (timestamp/value pairs from an external source). Am I really forced to drop Python and go back to C/C++?
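
As an aside, one pure-Python way to sidestep the int/float free lists for exactly this kind of data is the array module, which stores packed C values instead of individual Python objects. A minimal sketch, where read_pairs is a hypothetical stand-in for the external source:

from array import array

def read_pairs():
    # hypothetical stand-in for the external timestamp/value source
    for i in range(1000000):
        yield (i * 0.001, float(i))

# Packed C doubles instead of millions of Python float objects,
# so the float free list never gets involved.
timestamps = array('d')
values = array('d')
for ts, val in read_pairs():
    timestamps.append(ts)
    values.append(val)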


Update 2

It may indeed be the Python implementation that causes the problem. I found this answer, which conclusively explains the problem and a possible workaround.

Best Answer

Found this also answered by Alex Martelli in another thread:

Unfortunately (depending on your version and release of Python) some types of objects use "free lists" which are a neat local optimization but may cause memory fragmentation, specifically by making more and more memory "earmarked" for only objects of a certain type and thereby unavailable to the "general fund".

The only really reliable way to ensure that a large but temporary use of memory DOES return all resources to the system when it's done, is to have that use happen in a subprocess, which does the memory-hungry work then terminates. Under such conditions, the operating system WILL do its job, and gladly recycle all the resources the subprocess may have gobbled up. Fortunately, the multiprocessing module makes this kind of operation (which used to be rather a pain) not too bad in modern versions of Python.

In your use case, it seems that the best way for the subprocesses to accumulate some results and yet ensure those results are available to the main process is to use semi-temporary files (by semi-temporary I mean, NOT the kind of files that automatically go away when closed, just ordinary files that you explicitly delete when you're all done with them).
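
A minimal sketch of that semi-temporary-file pattern (the file name and serialization format here are illustrative, not prescribed by the answer):

import json
import multiprocessing
import os

def worker(path):
    # All memory-hungry work happens here; the free lists die with
    # this subprocess, so the OS reclaims everything when it exits.
    data = [i for i in range(int(1e7))]
    with open(path, 'w') as f:
        json.dump(sum(data), f)

if __name__ == '__main__':
    path = 'partial_result.json'
    p = multiprocessing.Process(target=worker, args=(path,))
    p.start()
    p.join()
    with open(path) as f:
        result = json.load(f)
    os.remove(path)  # explicitly delete the semi-temporary file
    print(result)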

Fortunately, I was able to split the memory-intensive work into separate chunks, which enabled the interpreter to actually free the temporary memory after each iteration. I used the following wrapper to run the memory-intensive function as a subprocess:

import multiprocessing

def run_as_process(func, *args):
    # Run func(*args) in a subprocess so that all memory it
    # allocates is returned to the OS when the subprocess exits.
    p = multiprocessing.Process(target=func, args=args)
    try:
        p.start()
        p.join()           # wait for the work to finish
    finally:
        p.terminate()      # ensure the child is gone, even on error
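
For example, a memory-hungry job (hypothetical here) can then be wrapped like this, and its memory is handed back to the OS as soon as the call returns:

def crunch(n):
    # hypothetical memory-hungry job; everything it allocates is
    # returned to the OS when the subprocess exits
    l = [i for i in range(n)]

run_as_process(crunch, int(1e8))

On platforms that spawn rather than fork (e.g. Windows), crunch must be importable and the call should be guarded by if __name__ == '__main__'.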

Regarding "Python: Garbage collection fails?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9617001/
