Lexical taxonomies and distributional representations are largely used to support a wide range of NLP applications, including semantic similarity measurements. Recently, several scholars have proposed new approaches to combine those resources into a unified representation preserving distributional and knowledge-based lexical features. In this paper, we propose and implement TaxoVec, a novel approach to selecting word embeddings based on their ability to preserve taxonomic similarity. In TaxoVec, we first compute the pairwise semantic similarity between taxonomic words through a new measure we previously developed, the Hierarchical Semantic Similarity (HSS), which we show outperforms previous measures on several benchmark tasks. Then, we train several embedding models on a text corpus and select the best model, that is, the model that maximizes the correlation between the HSS and the cosine similarity of the pairs of words that are in both the taxonomy and the corpus. To evaluate TaxoVec, we repeat the embedding selection process using three other benchmark semantic similarity measures. We use the vectors of the four selected embeddings as machine learning model features to perform several NLP tasks. The performance on those tasks constitutes an extrinsic evaluation of the criterion used to select the best embedding (i.e. the adopted semantic similarity measure). Experimental results show that (i) HSS outperforms state-of-the-art measures for measuring semantic similarity in a taxonomy on a benchmark intrinsic evaluation and (ii) the embedding selected through TaxoVec achieves a clear victory against embeddings selected by the competing measures on benchmark NLP tasks.
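The selection criterion described above can be sketched in a few lines: for each candidate embedding, gather the word pairs covered by both the taxonomy and the model's vocabulary, compute their cosine similarities, and keep the model whose similarities correlate best with the precomputed HSS scores. This is a minimal illustration, not the paper's implementation; the data structures (`models` as name-to-vector-dict mappings, `hss_scores` as a pair-to-score dict) and the use of Pearson correlation are assumptions for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(xs, ys):
    # Plain Pearson correlation; the paper's exact correlation
    # statistic is an assumption here.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_embedding(models, hss_scores):
    """Pick the model whose cosine similarities best track the HSS.

    models: {model_name: {word: vector}} (hypothetical layout)
    hss_scores: {(word1, word2): hss_value} over taxonomy pairs
    Only pairs present in both the taxonomy and a model's
    vocabulary contribute to that model's correlation.
    """
    best_name, best_corr = None, -2.0
    for name, vecs in models.items():
        hss_vals, cos_vals = [], []
        for (w1, w2), hss in hss_scores.items():
            if w1 in vecs and w2 in vecs:
                hss_vals.append(hss)
                cos_vals.append(cosine(vecs[w1], vecs[w2]))
        if len(hss_vals) >= 2:  # need at least two pairs to correlate
            corr = pearson(hss_vals, cos_vals)
            if corr > best_corr:
                best_name, best_corr = name, corr
    return best_name, best_corr
```

With real embeddings, `models` would hold the trained candidates (e.g. different architectures or hyperparameter settings) and `hss_scores` the HSS values for all word pairs in the taxonomy; the returned name is the embedding TaxoVec would select.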