python - Save the Universal Sentence Encoder to TFLite or serve it with the TensorFlow API

Tags: python tensorflow machine-learning tensorflow-lite tensorflow-hub

I have this code, which uses the pre-built Universal Sentence Encoder to find sentence similarity. It takes a .txt file as input, computes cosine similarity, and then accepts user input to find the sentences most similar to the user's query. Here is the code:

# TensorFlow Hub module for the Universal Sentence Encoder
# NOTE: the imports and the `embed`/`stop_words` definitions are assumed
# (TF 1.x + tensorflow_hub); they were not shown in the original snippet
import re

import nltk
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder/2", "https://tfhub.dev/google/universal-sentence-encoder-large/3"]
embed = hub.Module(module_url)
stop_words = set(nltk.corpus.stopwords.words('english'))

def get_features(texts):
    if type(texts) is str:
        texts = [texts]
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        return sess.run(embed(texts))

def remove_stopwords(stop_words, tokens):
    res = []
    for token in tokens:
        if token not in stop_words:
            res.append(token)
    return res

def process_text(text):
    text = text.encode('ascii', errors='ignore').decode()
    text = text.lower()
    text = re.sub(r'http\S+', ' ', text)
    text = re.sub(r'#+', ' ', text )
    text = re.sub(r'@[A-Za-z0-9]+', ' ', text)
    text = re.sub(r"([A-Za-z]+)'s", r"\1 is", text)
    #text = re.sub(r"\'s", " ", text)
    text = re.sub(r"\'ve", " have ", text)
    text = re.sub(r"won't", "will not ", text)
    text = re.sub(r"isn't", "is not ", text)
    text = re.sub(r"can't", "can not ", text)
    text = re.sub(r"n't", " not ", text)
    text = re.sub(r"i'm", "i am ", text)
    text = re.sub(r"\'re", " are ", text)
    text = re.sub(r"\'d", " would ", text)
    text = re.sub(r"\'ll", " will ", text)
    text = re.sub(r'\W', ' ', text)
    text = re.sub(r'\d+', ' ', text)
    text = re.sub(r'\s+', ' ', text)
    text = text.strip()
    return text

def lemmatize(tokens):
    lemmatizer = nltk.stem.WordNetLemmatizer()
    lemma_list = []
    for token in tokens:
        lemma = lemmatizer.lemmatize(token, 'v')
        if lemma == token:
            lemma = lemmatizer.lemmatize(token)
        lemma_list.append(lemma)
    # return [ lemmatizer.lemmatize(token, 'v') for token in tokens ]
    return lemma_list


def process_all(text):
    text = process_text(text)
    return ' '.join(remove_stopwords(stop_words, text.split()))

process_text("Hello! Who are you?")

with open('/content/sample_data/training.txt') as f:
    text = [i.strip() for i in f]

data_processed = list(map(process_text, text))
len(data_processed)

BASE_VECTORS = get_features(text)

def cosine_similarity(v1, v2):
    mag1 = np.linalg.norm(v1)
    mag2 = np.linalg.norm(v2)
    if (not mag1) or (not mag2):
        return 0
    return np.dot(v1, v2) / (mag1 * mag2)

def test_similarity(text1, text2):
    vec1 = get_features(text1)[0]
    vec2 = get_features(text2)[0]
    print(vec1.shape)
    return cosine_similarity(vec1, vec2)

def semantic_search(query, data, vectors):
    query = process_text(query)
    print("Extracting features...")
    query_vec = get_features(query)[0].ravel()
    res = []
    for i, d in enumerate(data):
        qvec = vectors[i].ravel()
        sim = cosine_similarity(query_vec, qvec)
        res.append((sim, d[:100], i))
    return sorted(res, key=lambda x : x[0], reverse=True)

semantic_search("da vinci", data_processed, BASE_VECTORS)
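The ranking logic in `semantic_search` can be exercised without loading the encoder at all. Below is a minimal, self-contained sketch that reuses the same `cosine_similarity` function with toy 3-d vectors standing in for the 512-d USE embeddings (the documents and vectors here are made up for illustration):

```python
import numpy as np

def cosine_similarity(v1, v2):
    mag1 = np.linalg.norm(v1)
    mag2 = np.linalg.norm(v2)
    if (not mag1) or (not mag2):
        return 0
    return np.dot(v1, v2) / (mag1 * mag2)

# Toy "embeddings": each row stands in for a 512-d USE vector.
docs = ["da vinci painted the mona lisa",
        "stock prices fell sharply",
        "leonardo was an inventor"]
vectors = np.array([[0.9, 0.1, 0.0],
                    [0.0, 0.0, 1.0],
                    [0.8, 0.3, 0.1]])
query_vec = np.array([1.0, 0.2, 0.0])  # pretend embedding of "da vinci"

# Same ranking as semantic_search: score every document, sort descending.
ranked = sorted(((cosine_similarity(query_vec, v), d)
                 for d, v in zip(docs, vectors)),
                key=lambda x: x[0], reverse=True)
print(ranked[0][1])  # most similar document
```

This separates the retrieval step (pure numpy) from the embedding step, which is also useful when debugging why a query ranks documents unexpectedly.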

I want to save the model and convert it to tflite. I have done a lot of research but could not find any solution, nor a way to serve it with the tensorflow api.

Best Answer

One option is to save the model in the SavedModel format and then convert the resulting model to tflite. Note that whether a model can be converted depends on the ops it uses, and some model architectures cannot be converted to the tflite format.

Regarding "python - Save the Universal Sentence Encoder to TFLite or serve it with the TensorFlow API", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60991417/
