I want to search a very large text file of SHA1 hashes, sorted by hash value, using Python. The text file is 10 GB and has 500,000,000 lines. Each line looks like this:
000009F0DA8DA60DFCC93B39F3DD51A088ED3FD9:27
I want to check whether a given hash appears in the file. I tried a binary search, but it only works for a small test file. With the 10 GB file the search takes far too long, and the process sometimes gets killed because it exceeds my 16 GB of RAM.
f = open('testfile.txt', 'r')
text = f.readlines()
data = text
#print data
x = '000009F0DA8DA60DFCC93B39F3DD51A088ED3FD9:27'

def binarySearch(data, l, r, x):
    while l <= r:
        mid = l + (r - l)/2
        # Check if x is present at mid
        if data[mid] == x:
            return mid
        # If x is greater, ignore left half
        elif data[mid] < x:
            l = mid + 1
            #print l
        # If x is smaller, ignore right half
        else:
            r = mid - 1
            #print r
    # If we reach here, then the element
    # was not present
    return -1

result = binarySearch(data, 0, len(data)-1, x)
if result != -1:
    print "Element is present at index % d" % result
else:
    print "Element is not present in array"
Is there a way to load the 10 GB text file into RAM once and then access it over and over again? I have 16 GB of RAM available. That should be enough, right?
Is there anything else I can do to speed up the search? Unfortunately, I'm out of ideas.
Best Answer
Take your sample input as input.txt, like this:
000000005AD76BD555C1D6D771DE417A4B87E4B4:4
00000000A8DAE4228F821FB418F59826079BF368:3
00000000DD7F2A1C68A35673713783CA390C9E93:630
00000001E225B908BAC31C56DB04D892E47536E0:5
00000006BAB7FC3113AA73DE3589630FC08218E7:2
00000008CD1806EB7B9B46A8F87690B2AC16F617:4
0000000A0E3B9F25FF41DE4B5AC238C2D545C7A8:15
0000000A1D4B746FAA3FD526FF6D5BC8052FDB38:16
0000000CAEF405439D57847A8657218C618160B2:15
0000000FC1C08E6454BED24F463EA2129E254D43:40
Then remove the counts so that your file becomes (in.txt below):
000000005AD76BD555C1D6D771DE417A4B87E4B4
00000000A8DAE4228F821FB418F59826079BF368
00000000DD7F2A1C68A35673713783CA390C9E93
00000001E225B908BAC31C56DB04D892E47536E0
00000006BAB7FC3113AA73DE3589630FC08218E7
00000008CD1806EB7B9B46A8F87690B2AC16F617
0000000A0E3B9F25FF41DE4B5AC238C2D545C7A8
0000000A1D4B746FAA3FD526FF6D5BC8052FDB38
0000000CAEF405439D57847A8657218C618160B2
0000000FC1C08E6454BED24F463EA2129E254D43
This ensures that every entry has a fixed size.
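The stripping step itself isn't shown; a minimal preprocessing sketch (assuming the input.txt and in.txt filenames used above) could look like this:

```python
def strip_counts(src_path, dst_path):
    # Drop the ":count" suffix so every record becomes a fixed-width
    # 41-byte line: 40 hex characters plus a newline separator.
    with open(src_path, 'r') as src, open(dst_path, 'w') as dst:
        for line in src:
            dst.write(line.split(':', 1)[0] + '\n')

# strip_counts('input.txt', 'in.txt')
```

Because the input is already sorted by hash, stripping the counts preserves the sort order.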
Now you can use an mmap-based approach to read the file, as described at https://docs.python.org/3/library/mmap.html :
import mmap
import os

FIELD_SIZE = 40 + 1  # 40 hex characters plus the newline separator

def binarySearch(mm, l, r, x):
    while l <= r:
        mid = (l + r) // 2
        # Check if x is present at mid
        mid_slice = mm[mid*FIELD_SIZE:(mid+1)*FIELD_SIZE]
        mid_slice = mid_slice.decode('utf-8').strip()
        if mid_slice == x:
            return mid
        # If x is greater, ignore left half
        elif mid_slice < x:
            l = mid + 1
        # If x is smaller, ignore right half
        else:
            r = mid - 1
    # If we reach here, then the element was not present
    return -1

x = '0000000CAEF405439D57847A8657218C618160B2'
with open('in.txt', 'r+b') as f:
    mm = mmap.mmap(f.fileno(), 0)
    f.seek(0, os.SEEK_END)
    size = f.tell()
    result = binarySearch(mm, 0, size // FIELD_SIZE - 1, x)
    if result != -1:
        print("Element is present at index %d" % result)
    else:
        print("Element is not present in array")
Output:
$ python3 find.py
Element is present at index 8
Since the file is never read fully into memory, you won't run into out-of-memory errors.
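As an aside (not part of the original answer), the hand-rolled loop can also be replaced by the standard-library bisect module, by wrapping the mmap in a small read-only sequence of fixed-width records; a sketch under the same 41-byte-record assumption:

```python
import bisect

FIELD_SIZE = 40 + 1  # 40 hex characters plus the newline separator

class HashFile:
    """Read-only sequence view over fixed-width records in a
    bytes-like object (an mmap or plain bytes)."""
    def __init__(self, buf):
        self.buf = buf

    def __len__(self):
        return len(self.buf) // FIELD_SIZE

    def __getitem__(self, i):
        rec = self.buf[i * FIELD_SIZE:(i + 1) * FIELD_SIZE]
        return rec.decode('ascii').strip()

def contains(buf, x):
    # bisect_left only needs __len__ and __getitem__, so it can
    # binary-search the file without loading it into memory.
    seq = HashFile(buf)
    i = bisect.bisect_left(seq, x)
    return i < len(seq) and seq[i] == x
```

With the mmap object from the answer, `contains(mm, '0000000CAEF405439D57847A8657218C618160B2')` performs the same search.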
Regarding python - binary search in a large .txt (sorted by hash) using Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58140529/