I prepared a small dataset for this project, but training fails with:
ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)
I suspect the tokenizer is the problem.
X and y for train_test_split:
from sklearn.model_selection import train_test_split

X = []
sentences = list(titles["title"])
for sen in sentences:
    X.append(preprocess_text(sen))

y = titles['Unnamed: 1']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
Here is the tokenizer:
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)
vocab_size = len(tokenizer.word_index) + 1  # vocab_size is 43 here
maxlen = 100
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
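For intuition: pad_sequences(..., padding='post') right-pads (and truncates) every sequence of word indices to exactly maxlen, so the batch stacks into a rectangular array. A minimal pure-Python sketch of that behavior (pad_post is a hypothetical helper, not the Keras implementation, which truncates from the front by default):

```python
def pad_post(seqs, maxlen, value=0):
    """Right-pad (or truncate) each integer sequence to exactly maxlen."""
    return [list(s[:maxlen]) + [value] * (maxlen - len(s[:maxlen])) for s in seqs]

rows = pad_post([[5, 2, 9], [7]], maxlen=5)
# Every row now has length 5, so the batch stacks into a (2, 5) array.
```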
My pretrained word2vec model has shape (412457, 400).
from numpy import array, asarray, zeros
from gensim.models import KeyedVectors
embeddings_dictionary = KeyedVectors.load_word2vec_format('drive/My Drive/trmodel', binary=True)
I use a pretrained word2vec model instead of GloVe. (vocab_size is 43, embedding dim is 100, and the weights come from embeddings_dictionary.vectors.)
from keras.models import Sequential
from keras.layers import Dense, LSTM, Embedding

model = Sequential()
embedding_layer = Embedding(vocab_size, 100, weights=[embeddings_dictionary.vectors], input_length=maxlen, trainable=False)
model.add(embedding_layer)
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)
Best Answer
If you want to use the pretrained weights, you must pass size arguments to the Embedding layer that match them, so that the layer can assign the pretrained weight matrix to its own weight matrix.
So you need:
embedding_layer = Embedding(412457, 400, weights=[embeddings_dictionary.vectors], input_length=maxlen , trainable=False)
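To see why the original call failed while this one is accepted, compare the shapes involved; a small sketch (the (412457, 400) shape is taken from the question, and no real matrix is allocated):

```python
# Shapes only -- no need to allocate the real 412457 x 400 matrix.
pretrained_shape = (412457, 400)  # embeddings_dictionary.vectors.shape from the question

# Keras Embedding(input_dim, output_dim) allocates its weight matrix as
# (input_dim, output_dim); any provided weights must match that shape exactly.
def embedding_weight_shape(input_dim, output_dim):
    return (input_dim, output_dim)

mismatch = embedding_weight_shape(43, 100) != pretrained_shape   # True -> the ValueError
fits = embedding_weight_shape(412457, 400) == pretrained_shape   # True -> accepted
```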
Note that input_length=maxlen is the padded sequence length, not the embedding dimension, so it is independent of the 400 in the weight shape; the padding from the question can stay as it is:

maxlen = 100
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
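An alternative worth considering: keep the tokenizer's small vocabulary and build a (vocab_size, 400) matrix that copies over only the word2vec rows for words the tokenizer actually knows, then pass that as weights=[embedding_matrix] to Embedding(vocab_size, 400, ...). A sketch, with a plain dict standing in for the gensim KeyedVectors lookup (membership tests on a real KeyedVectors may differ between gensim versions):

```python
import numpy as np

embed_dim = 400

# Stand-ins: a word -> vector lookup (like KeyedVectors) and a 1-based
# Keras-style word_index from the tokenizer.
embeddings_dictionary = {"elma": np.ones(embed_dim), "kitap": np.full(embed_dim, 2.0)}
word_index = {"elma": 1, "kitap": 2, "nadir": 3}

vocab_size = len(word_index) + 1  # +1 because index 0 is reserved for padding
embedding_matrix = np.zeros((vocab_size, embed_dim))
for word, idx in word_index.items():
    if word in embeddings_dictionary:  # words missing from word2vec keep the zero row
        embedding_matrix[idx] = embeddings_dictionary[word]

# embedding_matrix now has shape (vocab_size, 400) and matches
# Embedding(vocab_size, 400, weights=[embedding_matrix], ...).
```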
About "python - ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59840678/