This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason this helps is data sparsity, i.e., the limited amount of labeled data available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias.
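One common way to exploit unlabeled data, in the spirit described above, is self-training: fit a model on the labeled data, label the unlabeled points it is most confident about, and retrain. The following is a minimal toy sketch of that loop using a nearest-centroid classifier; the function name, the confidence heuristic, and all parameters are illustrative assumptions, not taken from the book.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.8):
    """Toy self-training loop (illustrative, not the book's algorithm).

    Repeatedly fits a nearest-centroid classifier on the current
    training set, then moves the unlabeled points it is most
    confident about into the training set with their predicted labels.
    """
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        # "Fit": one centroid per class.
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        # Confidence heuristic: softmax over negative distances to centroids.
        d = np.linalg.norm(pool[:, None, :] - centroids[None, :, :], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), classes[p.argmax(axis=1)]
        keep = conf >= threshold
        if not keep.any():
            break
        # Promote confidently labeled points into the training set.
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pred[keep]])
        pool = pool[~keep]
    return X, y
```

With two labeled seed points and four unlabeled points near them, the loop absorbs the whole pool with the labels of the nearest cluster; the same idea scales to the text-classification setting the book uses as a running example.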
This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established, and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book I include snippets of Python code and empirical evaluations, when relevant.
Table of Contents
Supervised and Unsupervised Prediction
Learning under Bias
Learning under Unknown Bias
Evaluating under Bias
About the Author
Anders Søgaard, University of Copenhagen
Anders Søgaard was born in Odense, Denmark in 1981. He has worked as a Senior Researcher at the University of Potsdam and is now an Associate Professor at the University of Copenhagen. His research areas include semi-supervised structure prediction, bias correction, and cross-language adaptation of language technology.
This wealth of information is packed into a slender volume, comprising only 80 pages, excluding front material and bibliography. As an unavoidable result, the text is extremely terse. For example, Kullback-Leibler divergence, Jensen-Shannon divergence, variance, and covariance matrices are each accorded one sentence. For this reason, the book is not appropriate as an introduction for real beginners. But it has unique value as a condensation of key points for an advanced student who would like to get started on serious research in the area, or for an established researcher who would like to catch up with the current state of the art in semi-supervised learning and domain adaptation.
Steven Abney, University of Michigan