I have text files about 100 GB in size with the format below (containing duplicate records of lines, IPs and domains):
domain|ip
yahoo.com|89.45.3.5
bbc.com|45.67.33.2
yahoo.com|89.45.3.5
myname.com|45.67.33.2
etc.
I am trying to parse them with the following Python code, but I keep getting memory errors. Does anybody know a better way to parse such files? (Time is an important factor for me.)
files = glob(path)
for filename in files:
    print(filename)
    with open(filename) as f:
        for line in f:
            try:
                domain = line.split('|')[0]
                ip = line.split('|')[1].strip('\n')
                if ip in d:
                    d[ip].add(domain)
                else:
                    d[ip] = set([domain])
            except:
                print(line)
                pass
        print("this file is finished")

for ip, domains in d.iteritems():
    for domain in domains:
        print("%s|%s" % (ip, domain), file=output)
Best Answer
A Python object takes a bit more memory than the same value does on disk; there is some overhead in the reference count, and for sets there is also the cached hash value per entry to account for.
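For a rough sense of that overhead (a minimal sketch; the exact numbers vary by CPython version and platform), compare a record's on-disk byte count with sys.getsizeof() of the objects it becomes in memory:

import sys

domain = 'yahoo.com'
ip = '89.45.3.5'

# On disk, the record 'yahoo.com|89.45.3.5\n' is 20 bytes.
print(len(domain) + 1 + len(ip) + 1)   # 20

# In memory, each str object carries header overhead on top of its payload.
print(sys.getsizeof(domain))           # ~58 bytes on 64-bit CPython 3
print(sys.getsizeof(ip))               # ~58 bytes

# A set holding the domain adds a hash table on top of that.
print(sys.getsizeof({domain}))         # ~216 bytes, excluding the str itself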
Don't read all of those objects into (Python) memory; use a database instead. Python comes with a library for SQLite databases, so use it to convert your files into a database. You can then build your output file from that:
import csv
import sqlite3
from itertools import islice

conn = sqlite3.connect('/tmp/ipaddresses.db')
conn.execute('CREATE TABLE IF NOT EXISTS ipaddress (domain, ip)')
conn.execute('''\
    CREATE UNIQUE INDEX IF NOT EXISTS domain_ip_idx
    ON ipaddress(domain, ip)''')

# `files` and `outputfile` are the same names used in your code above.
for filename in files:
    print(filename)
    with open(filename, 'rb') as f:
        reader = csv.reader(f, delimiter='|')
        cursor = conn.cursor()
        while True:
            with conn:  # commits one batch per transaction
                batch = list(islice(reader, 10000))
                if not batch:
                    break
                cursor.executemany(
                    'INSERT OR IGNORE INTO ipaddress VALUES(?, ?)',
                    batch)

conn.execute('CREATE INDEX IF NOT EXISTS ip_idx ON ipaddress(ip)')

with open(outputfile, 'wb') as outfh:
    writer = csv.writer(outfh, delimiter='|')
    cursor = conn.cursor()
    cursor.execute('SELECT ip, domain from ipaddress order by ip')
    writer.writerows(cursor)
This processes your input data in batches of 10,000 rows (each `with conn:` block commits one batch as a single transaction) and produces the sort index only after the insertions are done. Building that index takes some time, but it all fits in your available memory.
The UNIQUE index created at the start ensures that only unique domain / IP address pairs are inserted (so only unique domains per IP address are tracked), and the INSERT OR IGNORE statement skips any pair already present in the database.
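If the insert phase itself turns out to be the bottleneck on 100 GB of input, SQLite's durability settings can be relaxed for a one-off bulk load. This is an optional sketch, not part of the original answer; it assumes the database is a throwaway conversion artifact, so losing it on a crash costs nothing:

# Optional: run before the insert loop. Assumes /tmp/ipaddresses.db is
# disposable; these settings trade crash safety for insert speed.
conn.execute('PRAGMA journal_mode = OFF')   # no rollback journal
conn.execute('PRAGMA synchronous = OFF')    # no fsync on each commit
conn.execute('PRAGMA cache_size = -64000')  # ~64 MB page cache (negative = KiB)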
A short demo using only the sample input you gave:
>>> import sqlite3
>>> import csv
>>> import sys
>>> from itertools import islice
>>> conn = sqlite3.connect('/tmp/ipaddresses.db')
>>> conn.execute('CREATE TABLE IF NOT EXISTS ipaddress (domain, ip)')
<sqlite3.Cursor object at 0x106c62730>
>>> conn.execute('''\
... CREATE UNIQUE INDEX IF NOT EXISTS domain_ip_idx
... ON ipaddress(domain, ip)''')
<sqlite3.Cursor object at 0x106c62960>
>>> reader = csv.reader('''\
... yahoo.com|89.45.3.5
... bbc.com|45.67.33.2
... yahoo.com|89.45.3.5
... myname.com|45.67.33.2
... '''.splitlines(), delimiter='|')
>>> cursor = conn.cursor()
>>> while True:
...     with conn:
...         batch = list(islice(reader, 10000))
...         if not batch:
...             break
...         cursor.executemany(
...             'INSERT OR IGNORE INTO ipaddress VALUES(?, ?)',
...             batch)
...
<sqlite3.Cursor object at 0x106c62810>
>>> conn.execute('CREATE INDEX IF NOT EXISTS ip_idx ON ipaddress(ip)')
<sqlite3.Cursor object at 0x106c62960>
>>> writer = csv.writer(sys.stdout, delimiter='|')
>>> cursor = conn.cursor()
>>> cursor.execute('SELECT ip, domain from ipaddress order by ip')
<sqlite3.Cursor object at 0x106c627a0>
>>> writer.writerows(cursor)
45.67.33.2|bbc.com
45.67.33.2|myname.com
89.45.3.5|yahoo.com
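Note that the answer's code is written for Python 2 style, where files are opened in binary mode ('rb'/'wb') for the csv module. A minimal sketch of the same pipeline adjusted for Python 3, where csv expects text-mode files opened with newline='' (here `path` and `outputfile` are assumed to be defined as in your question):

import csv
import sqlite3
from glob import glob
from itertools import islice

conn = sqlite3.connect('/tmp/ipaddresses.db')
conn.execute('CREATE TABLE IF NOT EXISTS ipaddress (domain, ip)')
conn.execute('CREATE UNIQUE INDEX IF NOT EXISTS domain_ip_idx '
             'ON ipaddress(domain, ip)')

for filename in glob(path):
    # Python 3: text mode, newline='' so csv handles line endings itself.
    with open(filename, newline='') as f:
        reader = csv.reader(f, delimiter='|')
        cursor = conn.cursor()
        while True:
            with conn:  # one transaction per batch
                batch = list(islice(reader, 10000))
                if not batch:
                    break
                cursor.executemany(
                    'INSERT OR IGNORE INTO ipaddress VALUES(?, ?)',
                    batch)

conn.execute('CREATE INDEX IF NOT EXISTS ip_idx ON ipaddress(ip)')

with open(outputfile, 'w', newline='') as outfh:
    writer = csv.writer(outfh, delimiter='|')
    writer.writerows(conn.execute(
        'SELECT ip, domain FROM ipaddress ORDER BY ip'))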
Regarding "python - How to parse files bigger than 100GB in Python?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/26503199/