42 Andrew Chesterman
Problem: testing. Tests of these claims sometimes produce confirmatory evidence, sometimes not. But how rigorous are the tests? If you are investigating, say, explicitation or standardization, you can usually find some evidence of it in any translation; but how meaningful is such a finding? It would be more challenging to propose generalizations about what is explicitated or standardized, and under what circumstances, and then to test those. To find no evidence of explicitation or standardization would be a surprising and therefore strong result. Stronger still would be confirmation in a predictive classification test, as follows (based on a suggestion by Emma Wagner, personal communication, 2001). If these universals are supposed to be distinctive features of translations, they can presumably be used to identify translations. So you could take pairs of source and target texts, and see whether an analysis of some S-universal features allows you to predict which text in each pair is the source and which the target text. For each pair you would have to do the analysis in two directions, assuming that each text in turn is source and target, to see which direction best supports a given universal tendency. Or you could take a mixed set of texts consisting of translations and non-translations, analyse them for a given T-universal feature, and use the results to predict the category assignment of each text (translation or not). Some universals might turn out to be much more accurate predictors than others.
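The T-universal version of this predictive test can be sketched in code. Everything concrete here is an illustrative assumption, not part of Chesterman's proposal: the feature (density of explicit connectives, a crude proxy for explicitation), the connective list, and the threshold are all hypothetical stand-ins for whatever operationalization a real study would choose.

```python
# Hypothetical sketch of a predictive classification test for a T-universal.
# Assumption: explicitation shows up as a higher density of explicit
# connectives in translated texts than in non-translated texts.

CONNECTIVES = {"because", "therefore", "however", "moreover", "that"}

def connective_density(text: str) -> float:
    """Proportion of tokens that are explicit connectives."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,;") in CONNECTIVES for t in tokens) / len(tokens)

def predict_is_translation(text: str, threshold: float = 0.05) -> bool:
    """Predict 'translation' when the feature value exceeds the threshold."""
    return connective_density(text) > threshold

def classification_accuracy(labelled_texts) -> float:
    """labelled_texts: iterable of (text, is_translation) pairs.

    Returns the proportion of texts whose category (translation or not)
    the feature predicts correctly -- the measure by which universals
    could be compared as predictors.
    """
    labelled_texts = list(labelled_texts)
    correct = sum(predict_is_translation(text) == label
                  for text, label in labelled_texts)
    return correct / len(labelled_texts)
```

A universal that is a strong predictor would yield an accuracy well above chance on a mixed corpus; one that barely beats 0.5 on balanced data would fail this test even if it is weakly attested in individual translations.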
Problem: representativeness. Since we can never study all translations, nor even all translations of a certain type, we must take a sample. The more representative the sample, the more confidence we can have that our results and claims are valid more generally. Measuring representativeness is easier if we have access to large machine-readable corpora, but there always remains a degree of doubt. Our data may still be biased in some way that we have not realized. This is often the case with non-translated texts that are selected as a reference corpus. Representativeness is an even more fundamental problem with respect to the translation part of a comparable corpus. It is not a priori obvious what we should count as corpus-valid translations in the first place: there is not only the tricky borderline with adaptations etc., but also the issue of including or excluding non-professional translations or non-native translations, and even defining what a professional translation is (see Halverson 1998). Should we even include “bad” translations? They too are translations, of a kind.
Problem: universality. Claims may be made that a given feature is universal, but sometimes the data may only warrant a subset claim, if the data are not