I have obtained a list of tokenized keywords using nltk. The output is
['Natural', 'Language', 'Processing', 'with', 'PythonNatural', 'Language', 'Processingwith', 'PythonNatural', 'Language', 'Processing', 'with', 'Python', 'Editor', ':', 'Production', 'Editor', ':', 'Copyeditor']
I have a text file keyword.txt containing the following keywords:
Processing
Editor
Pyscripter
Language
Registry
Python
How can I match the keywords obtained from tokenization against my keyword.txt file so that a third file is created containing the matched keywords?
Here is the program I have been working on, but it creates the union of the two files:
with open(r'D:\file3.txt', 'w') as fout:
    keywords_seen = set()
    for filename in r'D:\File1.txt', r'D:\Keyword.txt':
        with open(filename) as fin:
            for line in fin:
                keyword = line.strip()
                if keyword not in keywords_seen:
                    fout.write(keyword + '\n')
                    keywords_seen.add(keyword)
Best Answer
How can I match the keywords obtained from tokenization with my keyword.txt file such that a third file is created for the matched keywords?
Here is a simple solution; adjust the file names as needed.
# these are the tokens:
tokens = set(['Natural', 'Language', 'Processing', 'with', 'PythonNatural', 'Language', 'Processingwith', 'PythonNatural', 'Language', 'Processing', 'with', 'Python', 'Editor', ':', 'Production', 'Editor', ':', 'Copyeditor'])

# create a set containing the keywords
with open('keywords.txt', 'r') as keywords:
    keyset = set(keywords.read().split())

# write the output file
with open('matches.txt', 'w') as matches:
    for word in keyset:
        if word in tokens:
            matches.write(word + '\n')
This produces a file matches.txt containing
Language
Processing
Python
Editor
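Since both the tokens and the keywords are already sets, the same result can be expressed more concisely with set intersection. This is a sketch using the data from the question inline instead of reading keywords.txt (note that sets are unordered, so the output order may differ from the loop version):

```python
# Tokens from the question (duplicates collapse automatically in a set)
tokens = set(['Natural', 'Language', 'Processing', 'with', 'PythonNatural',
              'Language', 'Processingwith', 'PythonNatural', 'Language',
              'Processing', 'with', 'Python', 'Editor', ':', 'Production',
              'Editor', ':', 'Copyeditor'])

# Keywords from keyword.txt
keyset = {'Processing', 'Editor', 'Pyscripter', 'Language', 'Registry', 'Python'}

# Set intersection keeps only the keywords that also occur in the tokens
matched = keyset & tokens
print(sorted(matched))  # ['Editor', 'Language', 'Processing', 'Python']
```

To write the matches to a file, `matches.write('\n'.join(sorted(matched)) + '\n')` does the job in one call.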
A similar question about a Python program that matches keywords present in two files can be found on Stack Overflow: https://stackoverflow.com/questions/23929572/