python - Why does 60GB of memory disappear on MySQL connector fetchall()?

Tags: python memory-management memory-leaks mysql-connector-python

MySQL 5.7.18
Python 2.7.5
Pandas 0.17.1
CentOS 7.3

MySQL table:

CREATE TABLE test (
  id varchar(12)
) ENGINE=InnoDB;

It is 10GB in size:

select round(((data_length) / 1024 / 1024 / 1024)) "GB"
from information_schema.tables 
where table_name = "test"

10GB

The machine has 250GB of RAM:

$ free -hm
              total        used        free      shared  buff/cache   available
Mem:           251G         15G        214G        2.3G         21G        232G
Swap:          2.0G        1.2G        839M

Selecting the data:

import psutil
print '1 ' + str(psutil.phymem_usage())

import os
import sys
import time
import pyodbc 
import mysql.connector
import pandas as pd
from datetime import date
import gc
print '2 ' + str(psutil.phymem_usage())

db = mysql.connector.connect({snip})
c = db.cursor()
print '3 ' + str(psutil.phymem_usage())

c.execute("select id from test")
print '4 ' + str(psutil.phymem_usage())

e=c.fetchall()
print 'getsizeof: ' + str(sys.getsizeof(e))
print '5 ' + str(psutil.phymem_usage())

d=pd.DataFrame(e)
print d.info()
print '6 ' + str(psutil.phymem_usage())

c.close()
print '7 ' + str(psutil.phymem_usage())

db.close()
print '8 ' + str(psutil.phymem_usage())

del c, db, e
print '9 ' + str(psutil.phymem_usage())

gc.collect()
print '10 ' + str(psutil.phymem_usage())

time.sleep(60)
print '11 ' + str(psutil.phymem_usage())

Output:

1 svmem(total=270194331648L, available=249765777408L, percent=7.6, used=39435464704L, free=230758866944L, active=20528222208, inactive=13648789504, buffers=345387008L, cached=18661523456)
2 svmem(total=270194331648L, available=249729019904L, percent=7.6, used=39472222208L, free=230722109440L, active=20563484672, inactive=13648793600, buffers=345387008L, cached=18661523456)
3 svmem(total=270194331648L, available=249729019904L, percent=7.6, used=39472222208L, free=230722109440L, active=20563484672, inactive=13648793600, buffers=345387008L, cached=18661523456)
4 svmem(total=270194331648L, available=249729019904L, percent=7.6, used=39472222208L, free=230722109440L, active=20563484672, inactive=13648793600, buffers=345387008L, cached=18661523456)
getsizeof: 1960771816
5 svmem(total=270194331648L, available=181568315392L, percent=32.8, used=107641655296L, free=162552676352L, active=88588271616, inactive=13656334336, buffers=345395200L, cached=18670243840)
<class 'pandas.core.frame.DataFrame'>
Int64Index: 231246823 entries, 0 to 231246822
Data columns (total 1 columns):
0    object
dtypes: object(1)
memory usage: 3.4+ GB
None
6 svmem(total=270194331648L, available=181571620864L, percent=32.8, used=107638353920L, free=162555977728L, active=88587603968, inactive=13656334336, buffers=345395200L, cached=18670247936)
7 svmem(total=270194331648L, available=181571620864L, percent=32.8, used=107638353920L, free=162555977728L, active=88587603968, inactive=13656334336, buffers=345395200L, cached=18670247936)
8 svmem(total=270194331648L, available=181571620864L, percent=32.8, used=107638353920L, free=162555977728L, active=88587603968, inactive=13656334336, buffers=345395200L, cached=18670247936)
9 svmem(total=270194331648L, available=183428308992L, percent=32.1, used=105781678080L, free=164412653568L, active=86735921152, inactive=13656334336, buffers=345395200L, cached=18670260224)
10 svmem(total=270194331648L, available=183428308992L, percent=32.1, used=105781678080L, free=164412653568L, active=86735921152, inactive=13656334336, buffers=345395200L, cached=18670260224)
11 svmem(total=270194331648L, available=183427203072L, percent=32.1, used=105782812672L, free=164411518976L, active=86736560128, inactive=13656330240, buffers=345395200L, cached=18670288896)

I even deleted the database connection and ran garbage collection.

How can a 10GB table use up 60GB of my memory?

Best answer

Short answer: Python data-structure memory overhead.

Your table has ~231M rows occupying ~10GB, so each row takes roughly 46 bytes on disk.

fetchall converts it into a list of tuples that looks like this:

[('abcd',), ('1234',), ... ]

Your list has ~231M elements, and the list object itself uses ~1.9GB: on average 8.48 bytes per element. Note that sys.getsizeof is shallow here: that figure covers only the list's array of pointers, not the tuples and strings they point to.
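The 8.48-byte average follows directly from the two numbers printed in the output above; a quick sanity check (the constants are copied from that output):

```python
# Average cost per list slot: the shallow size reported by
# sys.getsizeof(e), divided by the number of rows. Each slot is an
# 8-byte pointer, plus the list's over-allocation headroom.
list_size = 1960771816   # sys.getsizeof(e) printed above
rows = 231246823         # row count from df.info()
per_slot = list_size / float(rows)
print('%.2f bytes per slot' % per_slot)  # -> 8.48 bytes per slot
```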

$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys

A tuple:

>>> a = ('abcd',)
>>> sys.getsizeof(a)
64

A list with one tuple:

>>> al = [('abcd',)]
>>> sys.getsizeof(al)
80

A list with two tuples:

>>> al2 = [('abcd',), ('1234',)]
>>> sys.getsizeof(al2)
88

A list with 10 tuples:

>>> al10 = [ ('abcd',) for x in range(10)]
>>> sys.getsizeof(al10)
200

A list with 1M tuples:

>>> a_realy_long = [ ('abcd',) for x in range(1000000)]
>>> sys.getsizeof(a_realy_long)
8697472

That is almost exactly our number: ~8.6 bytes per tuple slot in the list.
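Those ~8.6 bytes per slot are only the list's pointer array, though; the tuple objects and the string objects inside them are separate allocations and dominate the total. A rough back-of-the-envelope estimate (shallow sizes come from whichever CPython runs the snippet and vary by version; on the OP's Python 2, mysql-connector typically returns unicode objects, which are larger still, and the allocator adds its own overhead, which is how the total climbs toward 60GB):

```python
import sys

# Shallow sizes of the pieces that make up ONE fetched row.
# getsizeof is not recursive: the list slot, the tuple, and the
# string are three separate allocations.
row = ('abcdefghijkl',)           # a varchar(12)-sized value
tuple_size = sys.getsizeof(row)   # the 1-element tuple object
str_size = sys.getsizeof(row[0])  # the string object inside it
slot_size = 8.48                  # average list-slot cost measured above

rows = 231246823
total_gb = rows * (tuple_size + str_size + slot_size) / 1024.0 ** 3
print('per row: %d + %d + %.2f bytes' % (tuple_size, str_size, slot_size))
print('estimated lower bound: %.1f GB' % total_gb)
```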

Unfortunately, there is not much you can do about this: mysql.connector chooses the data structure, and a dictionary cursor would use even more memory.

If you need to reduce memory usage, use fetchmany with a suitable size argument.
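A sketch of that batched pattern, using the stdlib sqlite3 module as a stand-in so the snippet runs anywhere; mysql.connector cursors expose the same DB-API fetchmany, so the loop is identical against MySQL:

```python
import sqlite3

# Build a throwaway table so the example is self-contained.
db = sqlite3.connect(':memory:')
c = db.cursor()
c.execute('CREATE TABLE test (id TEXT)')
c.executemany('INSERT INTO test VALUES (?)',
              (('row%07d' % i,) for i in range(10000)))

# Stream the result in fixed-size chunks instead of fetchall():
# only `size` rows are materialized in Python at any moment.
c.execute('SELECT id FROM test')
rows_seen = 0
while True:
    chunk = c.fetchmany(1000)
    if not chunk:
        break
    rows_seen += len(chunk)   # process the chunk, then drop it
print('processed %d rows' % rows_seen)  # -> processed 10000 rows
db.close()
```

With mysql.connector, leave the cursor unbuffered (the default) so rows are pulled from the server as each fetchmany call asks for them; and since the rows end up in a DataFrame anyway, pandas.read_sql with its chunksize argument wraps the same idea.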

The original question, "python - Why does 60GB of memory disappear on MySQL connector fetchall()?", is on Stack Overflow: https://stackoverflow.com/questions/52133002/
