python - Find the minimal list of unique n-grams in a list of strings

Tags: python algorithm

I have a list of 50K strings (city names), and I need the smallest list of character trigrams (preferably n-grams) such that every string is hit at least once by one trigram. Consider the following list: ['amsterdam', 'rotterdam', 'haarlem', 'utrecht', 'groningen']

The list of identifying trigrams is 4 long and would be (alternatives are possible):

['ter', 'haa', 'utr', 'gro']

I think my solution below finds the correct answer here, but it gives wrong answers when used on other lists.

from collections import Counter

def identifying_grams(list, n=3):

    def f7(seq):
        seen = set()
        seen_add = seen.add
        return [x for x in seq if not (x in seen or seen_add(x))]

    def ngrams(text, n=3):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    hits = []
    trigrams = []
    for item in list:
      #  trigrams += ngrams(item)
        trigrams += f7(ngrams(item))

    counts = Counter(trigrams).most_common()

    for trigram, count in counts:
        items = []
        for item in list:
            if trigram in item:
                hits.append(trigram)
                items.append(item)
        for i in items:
            list.remove(i)

    return(f7(hits))

list1 = ['amsterdam','rotterdam','haarlem','utrecht','groningen']
print(identifying_grams(list1))
# Good, we get: ['ter', 'haa', 'utr', 'gro']

list2 = ['amsterdam','schiedam']
print(identifying_grams(list2))
# Good, we get: ['dam']

list3 = ['amsterdam','schiedam','terwolde','wolstad']
print(identifying_grams(list3))
# Ouch, we get: ['ter', 'dam', 'wol']
# this should be ['dam', 'wol'], as only 2 trigrams are needed to identify the list...
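
For a list this small, the claim can be verified by brute force. Below is a minimal sketch (exponential in the number of trigrams, so only usable on tiny inputs) that enumerates trigram combinations by increasing size:

# Brute-force check (exponential, tiny lists only): find the smallest set of
# trigrams that hits every word in list3.
from itertools import combinations

words = ['amsterdam', 'schiedam', 'terwolde', 'wolstad']
trigrams = sorted({w[i:i+3] for w in words for i in range(len(w) - 2)})

for size in range(1, len(trigrams) + 1):
    covers = [c for c in combinations(trigrams, size)
              if all(any(t in w for t in c) for w in words)]
    if covers:
        print(size, covers)  # 2 [('dam', 'wol')]
        break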

So far I have received two answers, but both are flawed. Rupesh's works for lists shorter than 10 items; my list has over 50K items. The one from mujjiga does come up with a solution, although not a perfect one.

A bounty for the Python ninja who comes up with a perfect, scalable solution. Bonus points if it performs well and gives the same solution on every run!

Best Answer

Here is a theoretical analysis of @mujjiga's answer:

You can create classes of words that share the same ngram. You want to choose the smallest number of classes (that is, the smallest number of ngrams) that covers the whole set of words. This is the set cover problem. Unfortunately, this problem is NP-hard (not NP-complete, thanks @mujjiga). (Edit: hence there is no known solution that gives you the expected result in a reasonable time.) The greedy algorithm is almost the best known solution (see https://cs.stackexchange.com/questions/49777/is-greedy-algorithm-the-best-algorithm-for-set-cover-problem).

Note that even the greedy algorithm may give strange results. Take the sets {a, b}, {b, c}, {c, d} and the superset {a, b, c, d}. All three subsets have the maximum size. If you take {b, c} first, you need the two other subsets to cover the superset; if you take {a, b} and {c, d}, two subsets are enough.
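
A minimal sketch of that counterexample, with the subset order chosen so that the tie between the three equally large subsets is broken the unlucky way:

# Greedy set cover on the counterexample: the tie among the three size-2
# subsets is broken in favour of {b, c}, so greedy needs 3 sets instead of 2.
universe = {'a', 'b', 'c', 'd'}
subsets = [{'b', 'c'}, {'a', 'b'}, {'c', 'd'}]  # order makes the tie-break unlucky

uncovered, chosen = set(universe), []
while uncovered:
    best = max(subsets, key=lambda sub: len(sub & uncovered))  # first maximum wins
    chosen.append(best)
    uncovered -= best

print(chosen)  # [{'b', 'c'}, {'a', 'b'}, {'c', 'd'}]: 3 sets, where 2 would do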

Let's use the greedy algorithm and think about the implementation. The code that builds the dictionary mapping ngrams to words is straightforward:

all_words = ['amsterdam','schiedam','werkendam','amstelveen','schiebroek','werkstad','den haag','rotjeknor','gouda']
n=3
words_by_ngram = {}
for word in all_words:
    for ngram in (word[i:i+n] for i in range(0, len(word)-n+1)):
        words_by_ngram.setdefault(ngram, set()).add(word)

setdefault behaves like get if the key ngram exists, and inserts an empty set otherwise. The complexity is O(|all_words| * length of the longest word).
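
As an aside, the same mapping can be built with collections.defaultdict, which only constructs the empty set on a missing key (setdefault evaluates its default argument on every call). This sketch reuses all_words and n from above:

# Equivalent construction with collections.defaultdict (same complexity).
from collections import defaultdict

words_by_ngram = defaultdict(set)
for word in all_words:
    for i in range(len(word) - n + 1):
        words_by_ngram[word[i:i+n]].add(word)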

Now we want to take the ngram that covers the most words and remove those words from the dictionary, repeating until all the target words are covered.

Here is the simple version:

s = set(all_words) # the target
gs = set()
d = words_by_ngram.copy() # for the display
while s:
    # take the best ngram
    ngram, words = max(d.items(), key=lambda i: len(i[1])) # sort on word count
    # remove the words from the dictionary and delete the ngrams whose words have been already found
    d = {k:v for k, v in ((k, v - words) for k, v in d.items()) if len(v)}
    gs.add(ngram) # add the ngram to the result
    s -= words # remove the words from the target

# check
assert set().union(*[words_by_ngram[g] for g in gs]) == set(all_words)
# display
for g in gs:
    print("{} -> {}".format(g, words_by_ngram[g]))

Output:

ams -> {'amstelveen', 'amsterdam'}
gou -> {'gouda'}
wer -> {'werkstad', 'werkendam'}
rot -> {'rotjeknor'}
dam -> {'amsterdam', 'werkendam', 'schiedam'}
sch -> {'schiebroek', 'schiedam'}
den -> {'den haag'}

The complexity of this second step is O(|all_words| * |ngrams|), because of the loop to find the maximum and the update of the dictionary. Hence the overall complexity is O(|all_words| * |ngrams|).

You can reduce the complexity with a priority queue. Retrieving the best ngram then costs O(1), but updating the priority of an ngram whose word set has changed costs O(lg |ngrams|):

import heapq
class PriorityQueue:
    """Adapted from https://docs.python.org/3/library/heapq.html#priority-queue-implementation-notes
    A priority of 1 invalidates the entries
    """
    def __init__(self, words_by_ngram):
        self._d = {ngram:[-len(words), (ngram, words)] for ngram, words in words_by_ngram.items()}
        self._pq = list(self._d.values())
        heapq.heapify(self._pq)

    def pop(self):
        """get the ngram, words tuple with the max word count"""
        minus_len, (ngram, words) = heapq.heappop(self._pq)
        while minus_len == 1: # entry is not valid
            minus_len, (ngram, words) = heapq.heappop(self._pq)
        return ngram, words

    def update(self, ngram, words_to_remove):
        """remove the words from the sets and update priorities"""
        del self._d[ngram]
        ngrams_to_inspect = set(word[i:i+n] for word in words_to_remove
                                for i in range(0, len(word)-n+1))
        for ngram in ngrams_to_inspect:
            if ngram not in self._d: continue
            self._d[ngram][0] = 1 # use the reference to invalidate the entry
            _, (_, words) = self._d[ngram] # fetch the current word set
            words -= words_to_remove
            if words:
                self._d[ngram] = [-len(words), (ngram, words)] # new entry
                heapq.heappush(self._pq, self._d[ngram]) # add to the pq (O(lg ngrams))
            else: # nothing left: remove it from dict
                del self._d[ngram]


pq = PriorityQueue(words_by_ngram)
gs = set()
s = set(all_words) # the target
while s:
    # take the best ngram
    ngram, words = pq.pop()
    gs.add(ngram) # add the ngram to the result
    s -= words # remove the words from the target
    # remove the words from the dictionary and update priorities
    pq.update(ngram, words)

With this code, the overall complexity drops to O(|all_words| * lg |ngrams|). That said, I would be curious to know whether this is actually faster than the naive previous version with 50k items.
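
One way to find out is a rough timing harness on synthetic data. The random words below are an assumption, not data from the question; the priority-queue version can be timed the same way by swapping in the loop above:

# Rough timing sketch on synthetic input (shrink NUM_WORDS for a quick check).
import random, string, time

random.seed(0)  # reproducible input
NUM_WORDS = 50_000
all_words = [''.join(random.choices(string.ascii_lowercase, k=random.randint(5, 12)))
             for _ in range(NUM_WORDS)]
n = 3
words_by_ngram = {}
for word in all_words:
    for i in range(len(word) - n + 1):
        words_by_ngram.setdefault(word[i:i+n], set()).add(word)

start = time.perf_counter()
s, gs, d = set(all_words), set(), words_by_ngram.copy()  # naive greedy loop
while s:
    ngram, words = max(d.items(), key=lambda i: len(i[1]))
    d = {k: v for k, v in ((k, v - words) for k, v in d.items()) if v}
    gs.add(ngram)
    s -= words
print(f"naive greedy: {len(gs)} ngrams, {time.perf_counter() - start:.1f}s")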

Regarding "python - Find the minimal list of unique n-grams in a list of strings", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55140208/
