Automatic Extraction of Examples for Word Sense Disambiguation

CHAPTER 6. AUTOMATIC EXTRACTION OF EXAMPLES FOR WSD

The automatically annotated data - advantages:

• Used selectively, according to various criteria (which we discussed in our work), it can be extremely valuable for improving supervised WSD systems.
• Resources can be created with considerably little effort.
• Data can be extracted for all languages for which corpora are available.
• The granularity of senses can be controlled by the choice of corpora.
• It can provide a large number of examples.

All these advantages and disadvantages lead to the conclusion that automatically and manually annotated data compete mostly with respect to the effort invested in creating the data versus the performance of the final system. However, in our work we showed that automatically annotated data can be used for several different purposes, achieve good results and thus be considered important.

In order to better visualize the difference between automatically annotated data and manually prepared data, we looked at the system performance on three randomly chosen words (one noun, one verb and one adjective) from the lexical sample. We started with a training set consisting of only 10 randomly chosen instances, then gradually added sets of 10 new examples and observed the results. The curves can be seen in Figure 7.1, Figure 7.2 and Figure 7.3 on pages 93 and 95. The graphs show that the gradual addition of instances generally leads to an increase in accuracy. Even though the curves for the automatically annotated data lie far below the one for the manually annotated data, which can also be seen in the poor performance of our unsupervised experiment (see Table 6.9), we have already shown that this performance can easily be improved (refer to Table 6.12). Moreover, the constant growth of corpora ensures that more instances can easily be extracted. How many are enough, of course, will remain an open question in the field.
However, if we recall the interesting and extreme case of solid, as well as other words such as lose, talk and treat, we see that a large number of examples does not always lead to good results. This is because, as we noted, the quality of those examples is also exceptionally important.
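The incremental evaluation described above (train on a growing prefix of the data, 10 instances at a time, and record test accuracy at each step) can be sketched as follows. This is a minimal illustration, not the thesis implementation; the function names and the most-frequent-sense baseline learner are our own assumptions:

```python
import random
from collections import Counter

def learning_curve(train, test, make_classifier, step=10):
    """Train on growing prefixes of `train` (`step` instances at a time)
    and record test accuracy after each addition of examples."""
    random.seed(0)          # fixed seed so the random order is reproducible
    random.shuffle(train)
    curve = []
    for size in range(step, len(train) + 1, step):
        model = make_classifier(train[:size])
        correct = sum(1 for feats, sense in test if model(feats) == sense)
        curve.append((size, correct / len(test)))
    return curve

def mfs_factory(examples):
    """Hypothetical baseline learner: always predict the most frequent
    sense (MFS) observed in the current training slice."""
    mfs = Counter(sense for _, sense in examples).most_common(1)[0][0]
    return lambda feats: mfs
```

Plotting the `(size, accuracy)` pairs produced by `learning_curve` for the automatically and manually annotated training sets yields curves of the kind shown in Figures 7.1-7.3.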

6.9 Evaluation

The following section summarizes the results that our system achieved in the multiple experiments we conducted (see Section 6.8). Additionally, we report the total system performance according to the measures discussed in Section 4.1.

To begin with, let us consider the four experiments we completed. They differed in the composition of the training set (the sets are referred to later in the section by the numbers below):

1. Only Automatically Annotated Data
2. Filtered Automatically Annotated Data
3. Only Manually Annotated Data
4. Manually Annotated Data plus Filtered Automatically Annotated Data

From Table 6.15 we can see that our system achieves its best results with training set 4. Since precision equals recall in every configuration, each cell shows the single shared value (P = R):

Set  all features      feature selection    feature selection    best perf.
                       (forward)            (backward)
     coarse   fine     coarse   fine        coarse   fine        coarse   fine
1    45.4     35.5     54.7     46.2        53.2     44.2        56.2     47.5
2    54.7     47.0     65.8     59.2        63.3     56.8        66.7     60.2
3    70.7     65.0     78.7     74.3        77.8     73.3        79.3     75.1
4    70.5     64.9     78.7     74.0        77.6     73.0        79.4     75.0

Table 6.15: Summary of the results for all experiments (precision = recall in all cases).

We have always reported the accuracy of the system in terms of precision and recall; however, in Section 4.1 we also mentioned their harmonic mean (used to represent the total performance of a system) - the F-score. The F-score additionally allows us to compare our figures with the upper and lower bounds for the experiment. Thus, in Table 6.16 we show the computed F-scores for the four experiments we conducted, compared to the upper (IAA) and lower (MFS heuristic) bounds for the English lexical sample task in the Senseval-3 competition, as reported by (Mihalcea et al., 2004a).
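The F-score mentioned above is the harmonic mean of precision and recall. A minimal sketch of the computation (the function name is ours, not from the thesis):

```python
def f_score(precision, recall):
    """Balanced F-score (F1): the harmonic mean of precision and recall,
    F = 2 * P * R / (P + R). Returns 0.0 when both inputs are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Note that whenever precision equals recall, as in every cell of Table 6.15, the harmonic mean collapses to that shared value, so the F-scores in Table 6.16 can be read off directly from Table 6.15.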
