python - Why does this Python method leak memory?

Tags: python, memory-leaks

This method iterates over a list of terms from the database, checks whether those terms appear in the text passed as a parameter, and if so, replaces them with a link to the search page that takes the term as an argument.

The number of terms is high (about 100,000), so the process is very slow, but that's fine since it runs as a cron job. However, it makes the script's memory consumption skyrocket and I can't find out why:

class SearchedTerm(models.Model):

[...]

@classmethod
def add_search_links_to_text(cls, string, count=3, queryset=None):
    """
        Take the list of all searched terms and look for them in the 
        text. If they exist, turn them into links to the search
        page.

        This process is limited to `count` replacements maximum.

        WARNING: because the sites have different URL schemas, we don't
        provide direct links, but inject the {% url %} tag, 
        so it must be rendered before display. You can use the `eval`
        tag from `libs` for this. Since they have different namespaces as
        well, we insert a generic 'namespace' and delegate to the 
        template to replace it with the proper one.

        If you have a batch process to run, you can pass a queryset
        that will be used instead of fetching all searched terms on
        each call.
    """

    found = 0

    terms = queryset or cls.on_site.all()

    # To avoid replacing duplicate searched terms twice, keep a set of
    # already linkified terms. Pre-seed it with the words we are going
    # to insert along with the link, so they won't match again on
    # later passes.
    processed = set((u'video', u'streaming', u'title', 
                     u'search', u'namespace', u'href', u'title', 
                     u'url'))

    for term in terms:

        text = term.text.lower()

        # skip small words, and do a
        # quick check to avoid all the rest of the matching
        if len(text) < 3 or text not in string:
            continue

        if found and cls._is_processed(text, processed):
            continue

        # match the search word with accent, for any case
        # ensure this is not part of a word by including 
        # two 'non-letter' character on both ends of the word
        pattern = re.compile(ur'([^\w]|^)(%s)([^\w]|$)' % re.escape(text), 
                            re.UNICODE|re.IGNORECASE)

        if re.search(pattern, string):
            found += 1

            # create the link string
            # replace the word in the description 
            # use back references (\1, \2, etc) to preserve the original
            # formatting
            # use raw unicode strings (ur"string" notation) to avoid
            # problems with accents and escaping

            query = '-'.join(term.text.split())
            url = ur'{%% url namespace:static-search "%s" %%}' % query
            replace_with = ur'\1<a title="\2 video streaming" href="%s">\2</a>\3' % url

            string = re.sub(pattern, replace_with, string)

            processed.add(text)

            if found >= count:
                break

    return string
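The replacement technique the method relies on (a 'non-letter' guard on both sides of the term, plus back references) can be sketched in isolation. This is a Python 3 port (the `ur''` literal no longer exists there) with the term escaped; `linkify` is a hypothetical helper, not part of the question's code:

```python
import re

def linkify(term, text, url):
    # Mirror of the question's pattern: a 'non-letter' guard on both
    # sides keeps "video" from matching inside "videos".
    pattern = re.compile(r'([^\w]|^)(%s)([^\w]|$)' % re.escape(term),
                         re.UNICODE | re.IGNORECASE)
    # \1 and \3 put the guard characters back; \2 preserves the
    # original casing of the matched term.
    replacement = r'\1<a href="%s">\2</a>\3' % url
    return pattern.sub(replacement, text)

print(linkify('video', 'A video here', '/search/video'))
# -> A <a href="/search/video">video</a> here
```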

You may also need this code:

class SearchedTerm(models.Model):

[...]

@classmethod
def _is_processed(cls, text, processed):
    """
        Check whether the text is part of an already processed string.
        We don't just use `in` on the set; we use `in` against each 
        string of the set, to catch substring matches that would 
        otherwise destroy the tags.

        This is mainly a utility function, so you probably won't use
        it directly.
    """
    if text in processed:
        return True

    return any(((text in string) for string in processed))
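The same check as a standalone sketch, runnable outside Django (names here are illustrative, not the question's actual API):

```python
def is_processed(text, processed):
    # Fast path: exact set membership.
    if text in processed:
        return True
    # Slow path: substring check against each already-linkified string,
    # so a term hiding inside an inserted tag is also caught.
    return any(text in s for s in processed)

processed = {'href', 'streaming video'}
print(is_processed('video', processed))   # True: substring of a member
```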

I really have only two referenced objects that could be the suspects here: `terms` and `processed`. But I can't see any reason why they wouldn't be garbage collected.
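One stdlib way to check whether objects really do survive between calls is to diff snapshots of what the cyclic garbage collector is tracking. This is a generic debugging sketch, not specific to the question's code:

```python
import gc
from collections import Counter

def object_counts():
    # Snapshot of live, gc-tracked objects grouped by type name.
    # Note: the cyclic GC only tracks container types, so plain
    # strings and ints won't appear here.
    return Counter(type(o).__name__ for o in gc.get_objects())

before = object_counts()
hoard = [[i] for i in range(1000)]    # simulated leak: 1000 small lists
after = object_counts()
growth = after - before               # Counter subtraction keeps positives
print(growth.most_common(1))          # 'list' dominates the growth
```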

Edit:

I guess I should mention that this method is called from inside a Django model method. I don't know if it's relevant, but here is the code:

class Video(models.Model):

[...]

def update_html_description(self, links=3, queryset=None):
    """
        Take the list of all searched terms and look for them in the 
        description. If they exist, turn them into links to the search
        engine. Put the result into `html_description`.

        This uses `add_search_links_to_text` and therefore has the same 
        limitations.

        It DOESN'T call save().
    """
    queryset = queryset or SearchedTerm.objects.filter(sites__in=self.sites.all())
    text = self.description or self.title
    self.html_description = SearchedTerm.add_search_links_to_text(text, 
                                                                  links, 
                                                                  queryset)

I could imagine that Python's automatic regex caching eats up some memory. But that should happen only once, whereas the memory consumption goes up with every single call to update_html_description.
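For what it's worth, the `re` module's internal pattern cache (used by the top-level functions and by `re.compile`) is bounded, so it cannot grow without limit, and it can be emptied explicitly with `re.purge()`:

```python
import re

re.search('spam', 'spam and eggs')    # populates the internal cache
re.purge()                            # empties it again

# Pre-compiling once and holding on to the object sidesteps the
# cache lookup entirely on later calls:
pat = re.compile(r'\bspam\b')
print(pat.search('spam and eggs').group())   # spam
```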

The problem is not just that it consumes a lot of memory; the problem is that it never releases it: every call takes about 3% of the RAM, eventually filling it up and crashing the script with "cannot allocate memory".

Best Answer

As soon as you call it, the whole queryset is loaded into memory, and that is what eats up your RAM. If the result set is that large, you want to fetch the results in chunks: that may mean more hits to the database, but it also means far less memory consumption.
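In Django that usually means `QuerySet.iterator()`, which streams rows instead of caching them on the queryset. The underlying chunking idea can be sketched in plain Python; `fetch_chunk` below is a hypothetical stand-in for a LIMIT/OFFSET database query:

```python
def iter_in_chunks(fetch_chunk, chunk_size=100):
    # Yield rows one at a time while only ever holding `chunk_size`
    # rows in memory, at the cost of one query per chunk.
    offset = 0
    while True:
        chunk = fetch_chunk(offset, chunk_size)
        if not chunk:
            return
        for row in chunk:
            yield row
        offset += chunk_size

# Simulated table standing in for SearchedTerm rows
TABLE = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']

def fake_fetch(offset, limit):
    return TABLE[offset:offset + limit]

print(list(iter_in_chunks(fake_fetch, chunk_size=2)))
# -> ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
```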

Regarding "python - Why does this Python method leak memory?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/6739679/
