Question:
I'm working on a data-analysis project that requires me to compare substrings of unknown words against corpora of good and bad words.
I initially generated 4 lists and stored them on disk in pickle format.
-rw-rw-r-- 1 malware_corpus malware_corpus 189M May 4 13:11 clean_a.pkl
-rw-rw-r-- 1 malware_corpus malware_corpus 183M May 4 13:12 clean_b.pkl
-rw-rw-r-- 1 malware_corpus malware_corpus 1.7M Apr 30 11:12 data_backup.csv
-rw-rw-r-- 1 malware_corpus malware_corpus 2.9M May 4 13:13 data.csv
-rw-rw-r-- 1 malware_corpus malware_corpus 231M May 4 13:12 mal_a.pkl
-rw-rw-r-- 1 malware_corpus malware_corpus 232M May 4 13:13 mal_b.pkl
So in my code, whenever a new string arrives, I take these 4 lists and compare its substrings against all of them to compute a score. Because all 4 lists are held in memory, the program is slow.
Moreover, each list contains millions of words, so any search takes even longer, because it needs O(n) time.
Desired solution:
- What is an efficient way to store these 4 lists so they don't consume all my memory?
- Is there a better way to search for a string across the 4 lists?
- How can I access large lists like this in Python?
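One common answer to the "better way to search" question, sketched here as an assumption rather than the accepted approach below, is to build a `set` once and test membership in average O(1) time instead of scanning a list in O(n). The word lists are the toy ones from the example, not the real corpus:

```python
# Minimal sketch: set membership is average O(1), versus O(n) for list scans.
# The words below are illustrative, not the project's actual corpus.
clean_x_list = ['ap', 'pp', 'pl', 'le', 'bo', 'xl', 'ap']

# Build once (duplicates collapse), then reuse for every query.
clean_x_set = frozenset(clean_x_list)

print('ap' in clean_x_set)  # True
print('zz' in clean_x_set)  # False
```

Note that a set only answers "is it there?"; it drops duplicate counts, so it cannot replace the frequency computation by itself.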
Code section:
def create_corpus(self):
    """Build the clean and malicious n-gram corpora from the word files
    and persist all four lists to disk in pickle format.
    """
    base_dir = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(base_dir, 'utils/x.txt'), 'r') as f:
        for line in f:
            line = line.rstrip()
            self.line_x = self.calculate_x(line)
            for i in self.line_x:
                self.clean_xs.append(i)
            self.line_y = self.calculate_y(line)
            for i in self.line_y:
                self.clean_ys.append(i)
    with open(os.path.join(base_dir, 'utils/y.txt'), 'r') as f:
        for line in f:
            line = line.rstrip()
            self.line_x = self.calculate_x(line)
            for i in self.line_x:
                self.mal_xs.append(i)
            self.line_y = self.calculate_y(line)
            for i in self.line_y:
                self.mal_ys.append(i)
    # Store the datasets in pickle format
    with open(os.path.join(base_dir, 'utils/clean_xs.pkl'), 'wb') as f:
        pickle.dump(self.clean_xs, f)
    with open(os.path.join(base_dir, 'utils/clean_ys.pkl'), 'wb') as f:
        pickle.dump(self.clean_ys, f)
    with open(os.path.join(base_dir, 'utils/mal_xs.pkl'), 'wb') as f:
        pickle.dump(self.mal_xs, f)
    with open(os.path.join(base_dir, 'utils/mal_ys.pkl'), 'wb') as f:
        pickle.dump(self.mal_ys, f)
    return 1
def compare_score_function(self, domain):
    self.domain = domain
    self.g_freq = {}
    self.b_score = 0.0
    from collections import Counter
    # Count every n-gram in one pass; each lookup afterwards is O(1),
    # instead of the O(n) scan that list.count() performs per substring.
    counts = Counter(self.clean_xs)
    total = len(self.clean_xs)
    for x in self.substrings_of_domain:
        self.g_freq[x] = {}
        self.g_freq[x]['occur'] = counts[x]
        self.g_freq[x]['freq'] = counts[x] / total
    for key, value in self.g_freq.items():  # iteritems() is Python 2 only
        self.b_score += value['freq']
    return self.b_score
def calculate_x(self, domain):
    self.domain = self.clean_url(domain)
    self.bgrm = list(ngrams(self.domain, 2))  # character bigrams
    self.bgrm = [''.join(a) for a in self.bgrm]
    return self.bgrm
def calculate_y(self, domain):
    self.domain = self.clean_url(domain)
    self.tgrm = list(ngrams(self.domain, 3))  # character trigrams
    self.tgrm = [''.join(a) for a in self.tgrm]
    return self.tgrm
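The two helpers above only join consecutive character n-grams (here `ngrams` is presumably `nltk.util.ngrams`). The same result can be reproduced with plain slicing, as in this standalone sketch:

```python
def char_ngrams(text, n):
    """Return the joined character n-grams of text, mirroring
    [''.join(a) for a in ngrams(text, n)]."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams('apple', 2))  # ['ap', 'pp', 'pl', 'le']
print(char_ngrams('apple', 3))  # ['app', 'ppl', 'ple']
```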
Example walkthrough
- clean_x_list = ['ap','pp','pl','le','bo','xl','ap']
- clean_y_list = ['apa','ppa','fpl','lef','bfo','xdl','mpd']
- bad_x_list = ['ti','qw','zx','qa','qa','qa','uy']
- bad_y_list = ['zzx','zxx','qww','qww','qww','uyx','uyx']
Suppose these are my 4 lists.
Now a new string arrives, say apple. First I compute the x words of apple => ['ap','pp','pl','le'], then the y words of apple => ['app','ppl','ple'].
Next I search for each x word of apple, i.e. ['ap','pp','pl','le'], in clean_x_list and bad_x_list,
and compute the number of occurrences and the frequency:
- occurrences of ap in clean_x_list = 2
- frequency of ap in clean_x_list = 2/7
- occurrences of ap in bad_x_list = 0
- frequency of ap in bad_x_list = 0/7
Similarly, I compute the occurrences and frequencies of the other words, and finally sum everything up.
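The frequency arithmetic above can be precomputed in one pass with `collections.Counter`, making each lookup O(1) rather than an O(n) `list.count` scan. This sketch uses the example lists, not the real corpus:

```python
from collections import Counter

# The example lists from the walkthrough.
clean_x_list = ['ap', 'pp', 'pl', 'le', 'bo', 'xl', 'ap']
bad_x_list = ['ti', 'qw', 'zx', 'qa', 'qa', 'qa', 'uy']

# Build the count tables once; every lookup afterwards is O(1).
clean_counts = Counter(clean_x_list)
bad_counts = Counter(bad_x_list)

def freq(word, counts, total):
    # Counter returns 0 for missing keys, so absent words score 0.0.
    return counts[word] / total

print(clean_counts['ap'], freq('ap', clean_counts, len(clean_x_list)))  # 2 0.2857142857142857
print(bad_counts['ap'], freq('ap', bad_counts, len(bad_x_list)))        # 0 0.0
```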
Best answer
Consider sorting your lists and using bisect
to search them. In that case the worst-case lookup time is O(log n).
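A minimal sketch of that suggestion: sort each list once up front, then use `bisect_left` for O(log n) membership tests (the list contents are the example ones, not the real corpus):

```python
import bisect

def contains(sorted_words, word):
    """O(log n) membership test on a sorted list via binary search."""
    i = bisect.bisect_left(sorted_words, word)
    return i < len(sorted_words) and sorted_words[i] == word

# Sort once; bisect requires sorted input.
clean_x = sorted(['ap', 'pp', 'pl', 'le', 'bo', 'xl', 'ap'])

print(contains(clean_x, 'ap'))  # True
print(contains(clean_x, 'zz'))  # False
```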
A similar question on Stack Overflow, "python: How to search a large array in an efficient way?": https://stackoverflow.com/questions/50170079/