Evaluating word-vector generation techniques for use in semantic role labeling with recurrent LSTM neural networks

Daniel Toom
Göteborg : Chalmers tekniska högskola, 2017. 57 pp.
[Degree project at advanced (Master's) level]

This is an investigation of how performance on a natural language task, semantic role labeling (SRL), is affected by how the input text is encoded. Four methods for encoding words as dense vectors of floating-point numbers are evaluated: noise-contrastive estimation (NCE), continuous bag-of-words (CBOW), skip-grams, and global vectors for word representation (GloVe). Vectors are generated using a corpus of Wikipedia articles as the training set, with different values for various parameters such as vector length, context length, and training method. To evaluate the generated vectors, they are then used as input to a bidirectional recurrent neural network with LSTM units. This network is trained on the standard CoNLL-2005 shared task data set and evaluated with the associated test sets. The results show no large or consistent advantage to using word encodings from any of the tried methods or parameter settings over any of the others. The SRL system used thus seems to be fairly robust with regard to the choice of input vectors. All methods do generate vectors that outperform random vectors, however, indicating that pretraining vectors has a positive effect. The labeling accuracy is close to, but slightly worse than, previously published state-of-the-art results from the use of a similar network on the SRL task.
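All four embedding methods mentioned above learn from local co-occurrence within a context window. As a minimal, hypothetical sketch (not code from the thesis; the window size and tokenization are assumptions), the (target, context) training pairs that skip-gram-style methods consume can be extracted like this:

```python
def skipgram_pairs(tokens, window=2):
    """Collect (target, context) pairs within a symmetric context window.

    This is an illustrative toy; real systems add subsampling,
    negative sampling / NCE, and train embedding matrices on the pairs.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the target word itself
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the cat sat on the mat".split()
print(skipgram_pairs(sentence, window=1)[:4])
# → [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```

CBOW uses the same windowed pairs but predicts the target from the averaged context, whereas skip-gram predicts each context word from the target.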

Keywords: word-vector generation, SRL, NCE, CBOW, skip-grams, GloVe, CoNLL-2005, LSTM RNN.



The publication was registered 2017-08-25 and last modified 2017-08-25.

CPL ID: 251431
