I am trying to sample a data file of more than 260 million lines, drawing a uniformly distributed sample of fixed size 1000.
Here is what I did:
import random

file = "input.txt"
output = open("output.txt", "w+", encoding="utf-8")
samples = random.sample(range(1, 264000000), 1000)
samples.sort(reverse=False)

with open(file, encoding="utf-8") as fp:
    line = fp.readline()
    count = 0
    while line:
        if count in samples:
            output.write(line)
            samples.remove(count)
        count += 1
        line = fp.readline()
This code results in a MemoryError with no further explanation. Why does this code run out of memory?
As far as I can tell, it should read the file line by line. The file is 28.4 GB, so it cannot be read in its entirety, which is why I went with the readline() approach. How can I fix this so that the whole file can be processed, regardless of its size?
Edit: my latest attempt raised this error, which is in fact identical to every previous error message I have received:
MemoryError                               Traceback (most recent call last)
<ipython-input-1-a772dad1ea5a> in <module>()
     12 with open(file, encoding = "utf-8") as fp:
     13     count = 0
---> 14     for line in fp:
     15         if count in samples:
     16             output.write(line)

~\Anaconda3\lib\codecs.py in decode(self, input, final)
    320         # decode input (taking the buffer into account)
    321         data = self.buffer + input
--> 322         (result, consumed) = self._buffer_decode(data, self.errors, final)
    323         # keep undecoded input until the next call
    324         self.buffer = data[consumed:]

MemoryError:
Best answer
So it looks like this line is causing a huge memory spike:

samples = random.sample(range(1, 264000000), 1000)

My guess is that this call forces Python to create all 264 million integers in that range before doing the sampling. Try this code instead to sample without replacement over the same range:
from random import randint

file = "input.txt"
output = open("output.txt", "w+", encoding="utf-8")

samples = set()
while len(samples) < 1000:
    random_num = randint(0, 264000000)
    if random_num not in samples:
        samples.add(random_num)

with open(file, encoding="utf-8") as fp:
    count = 0
    for line in fp:
        if count in samples:
            output.write(line)
            samples.remove(count)
        count += 1
        if not samples:
            break
Regarding the MemoryError when sampling in Python with readline, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/53093236/