10 October 2018

Hi guys,

I am dealing with an NLP problem: identifying the keyword in a sentence.

E.g.:

Input: "I love playing PUBG - amazing game".

Output: "PUBG"

(This is just an example; the real data is not in English.)

I built a bag of words over the whole input and encoded the data: the input is the vector of word indices in the bag of words, and the output is a one-hot vector indicating the position of the keyword in the sentence (a short sketch of this preparation follows the example below).

E.g. for the data pair above:

Input: [121, 148, 224, 240, 88, 101]

Output: [0, 0, 0, 1, 0, 0]
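
For reference, the preparation looks roughly like this. This is a minimal sketch, not my exact code: the Tokenizer pipeline and the names texts / keyword_positions are illustrative, and max_features / maxlen are inferred from the model summary below.

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

max_features = 6000   # vocabulary size (inferred from the Embedding params below)
maxlen = 300          # padded sentence length (inferred from the Dense output below)

texts = ["I love playing PUBG - amazing game"]  # raw sentences
keyword_positions = [3]                         # token index of the keyword ("PUBG")

tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)  # -> lists of word indices

# Post-padding keeps each keyword position aligned with its one-hot target
input_train_pad = pad_sequences(sequences, maxlen=maxlen, padding='post')
y_train_pad = np.zeros((len(texts), maxlen))
for i, pos in enumerate(keyword_positions):
    y_train_pad[i, pos] = 1.0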

My dataset has about 10,000 records. I tested some simple recurrent neural network models on this data; the one below performed best.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense

model = Sequential()
model.add(Embedding(max_features, 32))            # word index -> 32-dim vector
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(64))
model.add(Dropout(0.5))
model.add(Dense(maxlen, activation='relu'))       # one score per sentence position
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
res = model.fit(input_train_pad, y_train_pad, epochs=10, batch_size=128, validation_split=0.2)

This is the output of the above code:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_7 (Embedding)      (None, None, 32)          192000
_________________________________________________________________
lstm_13 (LSTM)               (None, None, 64)          24832
_________________________________________________________________
lstm_14 (LSTM)               (None, 64)                33024
_________________________________________________________________
dropout_7 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_7 (Dense)              (None, 300)               19500
=================================================================
Total params: 269,356
Trainable params: 269,356
Non-trainable params: 0

Train on 7719 samples, validate on 1930 samples
Epoch 1/10
7719/7719 [==============================] - 47s 6ms/step - loss: 3.6325 - acc: 0.1965 - val_loss: 3.1061 - val_acc: 0.3187
Epoch 2/10
7719/7719 [==============================] - 45s 6ms/step - loss: 3.0287 - acc: 0.2731 - val_loss: 3.0797 - val_acc: 0.3187
Epoch 3/10
7719/7719 [==============================] - 49s 6ms/step - loss: 3.0226 - acc: 0.3311 - val_loss: 2.9471 - val_acc: 0.3187
Epoch 4/10
7719/7719 [==============================] - 48s 6ms/step - loss: 2.9734 - acc: 0.3342 - val_loss: 3.0742 - val_acc: 0.3187
Epoch 5/10
7719/7719 [==============================] - 49s 6ms/step - loss: 2.9737 - acc: 0.3342 - val_loss: 2.9441 - val_acc: 0.3187
Epoch 6/10
7719/7719 [==============================] - 56s 7ms/step - loss: 2.9568 - acc: 0.3342 - val_loss: 2.9393 - val_acc: 0.3187
Epoch 7/10
7719/7719 [==============================] - 57s 7ms/step - loss: 2.9641 - acc: 0.3342 - val_loss: 2.9424 - val_acc: 0.3187
Epoch 8/10
7719/7719 [==============================] - 54s 7ms/step - loss: 2.9629 - acc: 0.3342 - val_loss: 2.9524 - val_acc: 0.3187
Epoch 9/10
7719/7719 [==============================] - 55s 7ms/step - loss: 2.9641 - acc: 0.3342 - val_loss: 2.9429 - val_acc: 0.3187
Epoch 10/10
7719/7719 [==============================] - 54s 7ms/step - loss: 2.9554 - acc: 0.3342 - val_loss: 2.9375 - val_acc: 0.3187
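
For completeness, this is roughly how I read a prediction back out as a keyword. Again a sketch, not my exact code; it assumes the tokenizer and padded arrays from the preparation sketch above.

import numpy as np

probs = model.predict(input_train_pad[:1])              # shape (1, maxlen)
pos = int(np.argmax(probs[0]))                          # predicted keyword position
index_word = {i: w for w, i in tokenizer.word_index.items()}
keyword = index_word.get(int(input_train_pad[0, pos]))  # position -> word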

The performance is not good: training accuracy plateaus at 0.3342, and validation accuracy is stuck at 0.3187 from the very first epoch.

Could you please give me some advice?

Should I change my approach?

Thank you so much!
