Embeddings in Natural Language Processing: Theory and Advances in Vector Representations of Meaning

Mohammad Taher Pilehvar, Tehran Institute of Advanced Studies

Jose Camacho-Collados, Cardiff University

Embeddings have undoubtedly been one of the most influential research areas in Natural Language Processing (NLP). Encoding information into low-dimensional vector representations that are easily integrated into modern machine learning models has played a central role in the development of NLP. This new book in the Human Language Technologies series provides a high-level synthesis of the main embedding techniques in NLP.
Morgan & Claypool Publishers

Covering embeddings in the broad sense, the book starts by explaining conventional word vector space models and word embeddings (e.g., Word2Vec and GloVe) and then moves to other types of embeddings, such as word sense, sentence, document, and graph embeddings. It closes with an overview of recent developments in contextualized representations (e.g., ELMo and BERT) and explains their potential in NLP.
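To make the core idea concrete, the sketch below trains a toy Word2Vec model with the gensim library and queries it for a word vector and its nearest neighbors. The corpus and parameter values are illustrative assumptions for this page, not material from the book.

    from gensim.models import Word2Vec

    # Toy corpus: each sentence is a list of tokens. Real models are
    # trained on billions of tokens; this only shows the API shape.
    corpus = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
        ["cats", "and", "dogs", "are", "animals"],
    ]

    # vector_size is the embedding dimensionality; window is the context size.
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, seed=0)

    vec = model.wv["cat"]                        # a 50-dimensional numpy array
    print(vec.shape)                             # (50,)
    print(model.wv.most_similar("cat", topn=3))  # nearest neighbors by cosine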
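Contextualized models such as BERT, by contrast, assign the same word a different vector in each sentence it appears in. A minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is prescribed by the book), shows two occurrences of "bank" receiving distinct vectors:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def word_vector(sentence, word):
        """Contextual embedding of the first occurrence of `word`."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        return hidden[tokens.index(word)]

    v1 = word_vector("He sat on the river bank.", "bank")
    v2 = word_vector("She opened an account at the bank.", "bank")
    # Cosine similarity well below 1.0: the two senses get different vectors.
    print(torch.cosine_similarity(v1, v2, dim=0).item())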
RECENTLY PUBLISHED IN HUMAN LANGUAGE TECHNOLOGIES
Conversational AI: Dialogue Systems, Conversational Agents, and Chatbots
Michael McTear
Statistical Significance Testing for Natural Language Processing
Rotem Dror, Lotem Peled-Cohen, Segev Shlomov, Roi Reichart
Natural Language Processing for Social Media, Third Edition
Anna Atefeh Farzindar, Diana Inkpen
Deep Learning Approaches to Text Production
Shashi Narayan, Claire Gardent
