Automatic Extraction of Examples for Word Sense Disambiguation

CHAPTER 2. BASIC APPROACHES TO WORD SENSE DISAMBIGUATION

bootstrapping algorithms (Blum and Mitchell, 1998; Abney, 2002) or transductive learning algorithms (Joachims, 1999; Blum and Chawla, 2001; Joachims, 2003). Some general research on this particular family of methods can be found in (Chapelle et al., 2006; Seeger, 2001; Abney, 2008). A very valuable survey of semi-supervised learning methods is presented by Zhu (2008). The author divides this family of learning methods into several groups, the most important of which are Generative Models, Self-Training and Co-Training. We will again not discuss those methods in detail, but it is beneficial for the understanding of our work to gain an overview of the most important of them.

Generative Models (Nigam et al., 2000; Baluja, 1998; Fujino et al., 2005) are considered to be among the oldest semi-supervised learning methods. They assume a model with an identifiable mixture distribution. When a large amount of unlabeled data is present, the mixture components can be identified. Consequently, a single labeled example per component suffices to determine the mixture distribution. Such mixture components are also known as soft clusters.

Self-Training (Yarowsky, 1995; Riloff et al., 2003; Maeireizo et al., 2004; Rosenberg et al., 2005) is based on training a classifier with a small amount of labeled data and then using it to classify the unlabeled set of examples. The examples that are labeled most confidently are added to the training set, the classifier is retrained on the enlarged set, and the procedure is repeated.

Co-Training (Blum and Mitchell, 1998; Mitchell, 1999), as Zhu (2008) reports, is centered around several assumptions: the features can be split into two sets, each sub-feature set is sufficient to train a good classifier, and the two sets are conditionally independent given the class.
Thus the two sub-feature sets are used to train two separate classifiers. Both classifiers then label the as yet unannotated data and try to teach each other with the examples they labeled most confidently. The classifiers are then retrained and the same process repeats. In Section 6.1 we provide a rough overview of our system, which we likewise classify as semi-supervised word sense disambiguation.
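The self-training loop described above can be sketched in a few lines. The following is a minimal illustration, not a real WSD system: the nearest-centroid classifier, the two-dimensional points, and the sense labels are all hypothetical stand-ins for a real feature space, and confidence is approximated by the distance margin between the two closest class centroids.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    """Return (label, confidence); confidence is the margin between the
    nearest and second-nearest class centroid (larger = more confident)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, label)
        for label, c in centroids.items()
    )
    (best_d, best_lbl), (runner_d, _) = dists[0], dists[1]
    return best_lbl, runner_d - best_d

def self_train(labeled, unlabeled, rounds=5, per_round=2):
    """Grow the labeled set by repeatedly adopting the most confidently
    labeled examples from the unlabeled pool, then retraining."""
    labeled = dict(labeled)          # {point: sense label}
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        cents = {lbl: centroid([p for p, l in labeled.items() if l == lbl])
                 for lbl in set(labeled.values())}
        # Label the pool; keep only the most confident predictions.
        scored = sorted(((classify(p, cents), p) for p in pool),
                        key=lambda t: -t[0][1])
        for (lbl, _conf), p in scored[:per_round]:
            labeled[p] = lbl
            pool.remove(p)
    return labeled

seed = {(0.0, 0.0): "sense_a", (4.0, 4.0): "sense_b"}   # tiny labeled set
pool = [(0.5, 0.2), (3.8, 4.1), (0.1, 0.9), (4.2, 3.7)]  # unlabeled examples
final = self_train(seed, pool)
```

In a real system the confidence threshold (here, a fixed number of adoptions per round) matters greatly: adopting low-confidence examples lets early errors reinforce themselves, which is the main known weakness of self-training.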

Chapter 3

Comparability for WSD Systems

Throughout our survey we have mentioned multiple word sense disambiguation systems. This wide range of approaches, however, leads to a great number of difficulties when the systems need to be compared. How can we compare systems that differ so much from each other? Can one say that cow milk is better than soy milk, when they are obtained in two entirely different ways? Or can one then compare the chocolate made from cow milk with the chocolate made from soy milk and say which is better? Many similar issues arise when we talk about the comparability of WSD systems, most of which find common ground in the different ways in which the systems are built. In the following sections we look at some of those dissimilarities between WSD systems, as well as at the ways in which they can be handled so that an actual comparison between systems becomes possible. This is important for our work as well, since the system that we designed needs to be compared to others of its kind.

3.1 Differences between WSD Systems

Differences between WSD systems may arise at each decision point in the construction of a system. For example, let us recall Figure 2.1 on page 20. At each step where a choice between different options can be made (i.e. data collection, data preprocessing, algorithm selection), we select the setting of the system out of a huge pool of alternative settings. For instance, consider the situation in which we have to choose the data collection that we are going to use. It makes a big difference whether we pick a corpus from a technical or highly specific domain or a corpus consisting of general texts with wider domain coverage. Domain-specific texts use only a subset of the senses of the words in them, namely the senses that are relevant to the given domain, whereas general texts cover a bigger range of the senses of the words they contain.
Ide and Véronis (1998b) report that in the Wall Street Journal corpus (which is used extremely often in word sense disambiguation tasks) certain senses of words that are usually used for testing, such as line, are completely absent. Hence, the employment of different corpora in different WSD systems (especially corpora on entirely dissimilar domains
