I have a file containing many sections, in the following format:
section_name_1 <attribute_1:value> <attribute_2:value> ... <attribute_n:value> {
field_1 finish_num:start_num some_text ;
field_2 finish_num:start_num some_text ;
...
field_n finish_num:start_num some_text;
};
section_name_2 ...
... and so on
The file can have hundreds of thousands of lines. The number of attributes and fields can differ from section to section. I want to build some dictionaries to hold some of these values. I already have a separate dictionary that contains all the possible "attribute" values.
import re
from collections import defaultdict

def mapFile(myFile, attributeMap_d):
    valueMap_d = {}
    fieldMap_d = defaultdict(dict)
    for attributeName in attributeMap_d:
        valueMap_d[attributeName] = {}
    count = 0
    # open in text mode so each line is a str, not bytes
    with open(myFile, "r") as fh:
        for line in fh:
            # only look for lines with <
            if '<' in line:
                # match all attribute:value pairs inside <> brackets
                attributeAllMatch = re.findall(r'<(\S+):(\S+)>', line)
                attributeAllMatchLen = len(attributeAllMatch)
                count = 0
                sectionNameMatch = re.match(r'(\S+)\s+<', line)
                # store each section name and its associated attribute and value into dict
                for attributeName in attributeMap_d:
                    for element in attributeAllMatch:
                        if element[0] == attributeName:
                            valueMap_d[attributeName][sectionNameMatch.group(1).rstrip()] = element[1].rstrip()
                            count += 1
                    # stop searching if all attributes in section already matched
                    if count == attributeAllMatchLen:
                        break
                nextLine = next(fh)
                # in between each curly bracket, store all the field names and start/stop nums into dict
                # this while loop is very slow...
                while "};" not in nextLine:
                    fieldMatch = re.search(r'(\S+)\s+(\d+):(\d+)', nextLine)
                    if fieldMatch:
                        fieldMap_d[sectionNameMatch.group(1)][fieldMatch.group(1)] = [fieldMatch.group(2), fieldMatch.group(3)]
                    nextLine = next(fh)
    return valueMap_d
My problem is that the while loop that matches all the field values is noticeably slower than the rest of the code: according to cProfile, the run takes 0.5 seconds if I delete the while loop versus 2.2 seconds with it. I'd like to know what I can do to speed it up.
Best answer
Regular expressions are great when you need fancy pattern matching, but when you don't, parsing the text with str methods is faster. Below is some code that compares the timing of doing your field parsing with a regex versus with str.split.
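To make the split-based idea concrete before the benchmark, here is how one field line in the question's format breaks apart with str.split (a minimal sketch using a made-up sample line):

```python
# One field line in the format described in the question (hypothetical sample)
line = ' field_1 100:50 some_text ;'

# maxsplit=2 splits off at most two leading tokens and leaves the rest intact,
# so we get the field name, the finish:start pair, and the trailing text
data = line.split(maxsplit=2)
print(data[0])             # the field name
print(data[1].split(':'))  # the finish_num and start_num as strings
```

Note that splitting with no separator also discards the leading whitespace for free, which a regex would otherwise have to skip over.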
First, I create some fake test data and store it in the rows list. Doing that makes my demo code simpler than reading the data from a file, but more importantly, it eliminates the file-reading overhead, so we can compare the parsing speeds more accurately.
By the way, you should save sectionNameMatch.group(1) outside the field-parsing loop rather than making that call on every field line.
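Applied to the question's code, hoisting the section name out of the loop (and precompiling the regex, another common micro-optimization) might look like the following sketch; `parse_fields` is a hypothetical helper name, not part of the original code:

```python
import re
from collections import defaultdict

# compile the pattern once instead of re-looking it up on every line
fieldPattern = re.compile(r'(\S+)\s+(\d+):(\d+)')

def parse_fields(fh, nextLine, fieldMap_d, sectionName):
    # sectionName is looked up once by the caller, e.g.
    # sectionName = sectionNameMatch.group(1)
    while "};" not in nextLine:
        fieldMatch = fieldPattern.search(nextLine)
        if fieldMatch:
            fieldMap_d[sectionName][fieldMatch.group(1)] = [
                fieldMatch.group(2), fieldMatch.group(3)]
        nextLine = next(fh)

# tiny demo with an in-memory "file"
fieldMap_d = defaultdict(dict)
fh = iter([' field_2 200:150 some_text ;', '};'])
parse_fields(fh, ' field_1 100:50 some_text ;', fieldMap_d, 'section_name_1')
print(dict(fieldMap_d))
```

This only shaves constant work per line; the bigger win, as shown below, comes from dropping the regex entirely.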
Firstly, I'll demonstrate that my code parses the data correctly. :)
import re
from pprint import pprint
from time import perf_counter

# Make some test data
num = 10
rows = []
for i in range(1, num):
    j = 100 * i
    rows.append(' field_{:03} {}:{} some_text here ;'.format(i, j, j - 50))
rows.append('};')
print('\n'.join(rows))

# Select whether to use regex or `str.split` to do the parsing
use_regex = True

print('Testing {}'.format(('str.split', 'regex')[use_regex]))
fh = iter(rows)
fieldMap = {}
nextLine = next(fh)

start = perf_counter()
if use_regex:
    while "};" not in nextLine:
        fieldMatch = re.search(r'(\S+)\s+(\d+):(\d+)', nextLine)
        if fieldMatch:
            fieldMap[fieldMatch.group(1)] = [fieldMatch.group(2), fieldMatch.group(3)]
        nextLine = next(fh)
else:
    while "};" not in nextLine:
        if nextLine:
            data = nextLine.split(maxsplit=2)
            fieldMap[data[0]] = data[1].split(':')
        nextLine = next(fh)
print('time: {:.6f}'.format(perf_counter() - start))

pprint(fieldMap)
Output
field_001 100:50 some_text here ;
field_002 200:150 some_text here ;
field_003 300:250 some_text here ;
field_004 400:350 some_text here ;
field_005 500:450 some_text here ;
field_006 600:550 some_text here ;
field_007 700:650 some_text here ;
field_008 800:750 some_text here ;
field_009 900:850 some_text here ;
};
Testing regex
time: 0.001946
{'field_001': ['100', '50'],
'field_002': ['200', '150'],
'field_003': ['300', '250'],
'field_004': ['400', '350'],
'field_005': ['500', '450'],
'field_006': ['600', '550'],
'field_007': ['700', '650'],
'field_008': ['800', '750'],
'field_009': ['900', '850']}
Here's the output with use_regex = False; I won't bother re-printing the input data.
Testing str.split
time: 0.000100
{'field_001': ['100', '50'],
'field_002': ['200', '150'],
'field_003': ['300', '250'],
'field_004': ['400', '350'],
'field_005': ['500', '450'],
'field_006': ['600', '550'],
'field_007': ['700', '650'],
'field_008': ['800', '750'],
'field_009': ['900', '850']}
Now for the real test. I'll set num = 200000 and comment out the lines that print the input and output data.
Testing regex
time: 3.640832
Testing str.split
time: 2.480094
As you can see, the regex version is roughly 50% slower.
Those timings were obtained on my ancient 2GHz 32-bit machine running Python 3.6.0, so your speeds may vary. ;) If your Python doesn't have time.perf_counter, you can use time.time instead.
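Since time.perf_counter was only added in Python 3.3, a version-tolerant way to pick the timer might be the following small sketch (the summing loop is just a stand-in for the code being timed):

```python
try:
    from time import perf_counter
except ImportError:
    # Older Pythons: time.time has lower resolution but the same call shape
    from time import time as perf_counter

start = perf_counter()
total = sum(range(1000))  # stand-in for the parsing code being timed
elapsed = perf_counter() - start
print('time: {:.6f}'.format(elapsed))
```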
Regarding "python - How can I iterate through this text file faster?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47490341/