S. J. Jekat/G. Massey: The Puzzle of Translation Skills.

The Puzzle of Translation Skills.
Towards an Integration of E-Learning and Special Concepts of Computational Linguistics into the Training of Future Translators

Susanne J. Jekat / Gary Massey (Zurich/Winterthur)

1 Introduction

The translation process (and thus the training of future translators) is based not only on the bilingual competence of the translator but also on his/her capacity to analyze the relations between the source text (ST) and the target text (TT) in order to produce a translation which, on the one hand, is as close to the ST as possible and, on the other, meets all the necessary linguistic and cultural conventions of the target-language community. In addition, the translator must possess specialized knowledge of the subject or field covered by the ST itself (e.g. law, computer science, biology).

Information management serving the translation process receives effective support from electronic tools. Machine Translation (MT) and Computer-Aided Translation (CAT) [1] provide valuable assistance in ensuring consistency within documents and their follow-up versions, and industrial employers prefer to engage translators with advanced skills in the use of such tools. The training of translators who start their studies as "Non Computer Science Subjects" (NCSS) should therefore combine training in translation and its process with training in the application of supporting tools. In the present paper, we focus mainly on the task of information management during the translation process in an attempt to verify the following hypotheses:

  1. E-learning is an appropriate medium for training translators in the use of tools during the translation process.
  2. As hard-and-fast e-learning concepts are still very much under construction, the evaluation of e-learning lessons by the students themselves serves both to support research towards reliable e-learning concepts and, by introducing students to evaluation metrics, to train them to be critical users.

In the first part of this paper, we will introduce important skills which a professional translator should possess. The second part deals with e-learning and evaluation, two concepts that are central to the discipline dealing with language and the computer, Computational Linguistics (CL), but which have still to be fully developed. E-learning not only represents a medium of training but - like evaluation - should figure as an important research area of CL in its own right. However, it lies beyond the scope of the present paper to cover the field comprehensively, so instead we shall restrict ourselves to discussing some selected ideas. Information management in relation to the various stages of the translation process will be treated in greater detail than CAT and MT, which must remain the subject of additional research. We shall proceed to present some empirical work demonstrating initial steps towards a verification of hypothesis 1: the evaluation, by 37 students, of an industrially designed e-learning course presenting an entire CAT system, and of an e-learning course for translators, designed by one of the authors of this paper (Massey 2002), focussing on the translation process and information management. The latter has been evaluated in detail by 19 students. The last part of this paper presents a brief outline of future work combining Translation Studies with training in the use of specialized tools.


2 Computational skills of professional translators

Information management, MT and CAT are of growing importance for the work of today's professional translators. This is apparent from the very first stages of the translation process, when communication between the translator and client and the processing of the ST will in most cases be effected electronically. Information management then helps to classify and organize the vast amount of information that is available for the task of translation.

In addition, the growing need for translated texts in multilingual countries like Switzerland and multilingual communities like Europe is subjecting translators to increasing time pressure in their work. The use of tools here assists translators in the more effective management of their time and resources. Databases can show, for example, how many segments of a new ST may be translated automatically and help the translator to estimate the time and/or costs of the commission.

Moreover, in many cases the translation of repetitive texts such as directions for use, user manuals and product documentation favours the combination of MT or CAT with human translation: recurring parts of follow-up texts are processed electronically through the use of sample-based translation (see below), while the human translator focuses on new parts of the text for which there are no corresponding translation samples in the database. Particularly in the case of very large texts, [2] CAT and MT are able to guarantee terminological and formal consistency both in relation to the original text and within texts handled by many different translators. The databases created in the course of such work help to improve sample-based types of MT, which have already been successfully implemented within CAT systems (cf. Dorna 2001) and in which segments of the ST are matched with existing translations in a database. Since sample-based translation is closely bound to the expertise of translators, who constantly feed and regularly review the databases, these systems draw only on fully accurate, high-quality stored translation units. This in turn explains the reliability of their performance.
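The matching step at the heart of sample-based translation can be sketched in a few lines. The miniature database, the segments and the similarity threshold below are purely illustrative; real CAT systems use far more sophisticated matching and scoring.

```python
from difflib import SequenceMatcher

# Hypothetical translation-memory database: source segments paired
# with approved human translations (all entries invented).
translation_memory = {
    "Press the power button.": "Drücken Sie die Ein-/Aus-Taste.",
    "Remove the battery cover.": "Entfernen Sie die Batterieabdeckung.",
}

def best_match(segment, memory, threshold=0.75):
    """Return (translation, score) for the stored source segment most
    similar to `segment`, or (None, score) if no match clears the threshold."""
    best_score, best_target = 0.0, None
    for source, target in memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_target = score, target
    if best_score >= threshold:
        return best_target, best_score
    return None, best_score

# A recurring segment from a follow-up text is retrieved from the database;
# a genuinely new segment falls below the threshold and is routed to the
# human translator instead.
hit, _ = best_match("Press the power button.", translation_memory)
miss, _ = best_match("Calibrate the touch screen.", translation_memory)
print(hit)   # stored translation reused
print(miss)  # None: no sufficiently similar segment stored
```

The threshold is exactly the kind of parameter a trained translator must evaluate: set too low, the system proposes misleading matches; set too high, reusable segments are overlooked.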

Industrial employers are already aware of the need to make use of tools in the day-to-day work of their translation services. At Corporate Language Services in Switzerland, for instance, a tailored MT system forms an integral part of the service offered by the firm (cf. Schmid 2002). Egeberg (2003) strongly advises translator training institutions not to ignore these requirements and to integrate appropriate training methods into their curricula. He has even coined new terms to reflect the new reality with which translators are confronted (Egeberg 2003: 5): the translator must evolve from a "Sprachenspezialist" (Language Specialist) into an "Übersetzungsspezialist" (Translation Specialist) with the following skills:

  1. (S)he knows how to plan and manage projects;
  2. (s)he is familiar with the various translation technologies which help her/him to translate efficiently;
  3. (s)he possesses considerable competence in the languages in question as well as in the use of the tools which enable her/him to produce high-quality translations.

This leads on to a further issue. As far as we know, MT/CAT tools are taught as separate modules in the majority of institutions (e.g. Kalantzi 2002: 37 for Greece, see also Section 4.1 of this paper). In our view, it is only in combination with training in translation itself (in the form of problem-based or problem-oriented learning) that students can be adequately prepared for future professional work. This view is confirmed by students from the Institute of Translation and Interpretation at the Zurich University of Applied Sciences Winterthur (ZHW). In an informal questionnaire, 14 out of 17 subjects expressed a preference for applying the tools to a concrete translation task over attending a more theoretical introduction to a given system (Jekat 2002).

A claim of this paper is that evaluation itself should be systematically integrated into translator training curricula and become a concern of Translation Studies. We therefore address this research area of CL in the following section.


3 Short State of the Art: Evaluation in Computational Linguistics

In most cases, the evaluation of systems for Natural Language Processing (NLP) depends upon the features of the tested system and the purpose it should fulfil (cf. Jekat/Schultz 2001: 540, Jekat/Tessiore 2000). This is simply demonstrated if we consider the following two scenarios in which MT (representing one subset of NLP) is used:

  1. The recipient has no knowledge at all of the source language and the translation into the target language serves to facilitate a rough understanding of the text;
  2. The recipient is a professional translator with profound knowledge of both source and target language. (S)he uses pre-translation by an MT system to analyze how many segments of the text might be automatically translated by the system and which parts of the text would have to be subject to human translation. This analysis enables the translator to calculate the time and costs for an actual task.

In case 1), the part of the evaluation that concerns the linguistic quality of the TT might be ignored beyond the degree to which the TT is roughly comprehensible. In case 2), terminological errors or a TT of insufficient linguistic quality cannot be ignored, because the calculation of time and costs for the manual (i.e. human) completion of the TT depends mainly upon these features.
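The calculation performed in the second scenario above can be illustrated with a small sketch. The segment counts, processing times and hourly rate are invented for the example; in practice each of these figures would come from the translator's own analysis and experience.

```python
# Hypothetical pre-translation analysis: of 1,200 ST segments, the system
# finds stored translations for 800; the remaining 400 must be translated
# by hand. All counts, times and rates are invented for the illustration.
total_segments = 1200
matched_segments = 800
new_segments = total_segments - matched_segments

minutes_per_new_segment = 3.0      # assumed human translation speed
minutes_per_matched_segment = 0.5  # assumed review time per matched segment
rate_per_hour = 90.0               # assumed hourly rate

hours = (new_segments * minutes_per_new_segment
         + matched_segments * minutes_per_matched_segment) / 60
cost = hours * rate_per_hour
print(f"Estimated effort: {hours:.1f} h, estimated cost: {cost:.2f}")
```

Note that the review time for matched segments only stays low if the database contains reliable translation units; poor matches would push it towards the cost of full human translation.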

The example suggests that the results of an evaluation may relate to the different needs or perspectives of the evaluators. With regard to the linguistic quality of the system's output, the user with no knowledge of the source-language will produce different rankings to those of the seasoned translator (see also Miyazawa 2002).

What is more, even if linguistic correctness in a general sense is covered by the output of the tested MT system, criteria such as TT quality within a given context (in terms of choosing the most appropriate formulations) and strong fidelity to the ST within the bounds of TT conventions, both of which represent important metrics for the evaluation of human translations, are yet to be realized within the evaluation of MT performance. As Hovy et al. (2002: 1) observe:

A proposal to measure fidelity automatically by projecting both system output and a number of ideal human translations into a vector space of words, and then measuring how far the system's translation deviates from the mean of the ideal ones, is an intriguing idea whose generality still needs to be proved.
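The vector-space idea quoted above can be sketched with simple bag-of-words vectors. Actual proposals use far richer representations; the reference translations, the system output and the resulting figures below are invented solely to make the projection and the deviation measure concrete.

```python
from collections import Counter
from math import sqrt

def bow_vector(text, vocab):
    """Project a text into a word-count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def distance(u, v):
    """Euclidean distance between two vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Invented ideal human translations and one system output.
references = [
    "the device must be switched off before cleaning",
    "the device has to be switched off before cleaning",
]
system_output = "the machine must be turned off before the cleaning"

vocab = sorted({w for t in references + [system_output] for w in t.lower().split()})
ref_vectors = [bow_vector(t, vocab) for t in references]

# Mean of the ideal translations in the word vector space.
mean_ref = [sum(col) / len(ref_vectors) for col in zip(*ref_vectors)]

# Fidelity proxy: how far the system output deviates from that mean.
deviation = distance(bow_vector(system_output, vocab), mean_ref)
print(f"Deviation from mean reference vector: {deviation:.2f}")
```

Even this toy version makes the open question visible: the measure registers lexical deviation ("machine" for "device") but cannot tell a harmless paraphrase from a genuine translation error, which is precisely why its generality "still needs to be proved".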

The state of the art in CL evaluation is well characterized by Schmitz (2002) in a review of Carstensen et al. (2001), which devotes an entire chapter to the topic of evaluation (Jekat/Schultz 2001). Schmitz (2002) claims that no obvious motivation currently exists for a detailed description of evaluation in an introduction to CL:

The topic of "evaluation of language-processing systems" may in future be important enough to merit one of only six chapters of its own; however, the thoroughly readable 18 pages in this volume do not yet justify this decision, and not merely on grounds of aesthetic balance.

More recently, a number of approaches towards a theory of evaluation in the field of MT have been undertaken (inter alia cf. Hovy et al. 2002, Dabbadie et al. 2002, Popescu-Belis et al. 2002). On the basis of work on evaluation carried out by standardization initiatives like EAGLES (Expert Advisory Group on Language Engineering Standards) and ISO (International Organization for Standardization), Hovy et al. (2002: 4) offer a comprehensive taxonomy of criteria for the evaluation of MT, most of which can also be applied to the evaluation of CAT or NLP systems in general. The list below contains some selected items from the taxonomy of Hovy et al. (2002); for a complete list, see FEMTI 2003.

  1. Specifying the context of use
     • Characteristics of the translation task
     • Characteristics of the user of the MT system
     • Input characteristics (author and text)
  2. Quality characteristics, sub-characteristics and attributes
     • System internal characteristics
       - Linguistic resources and utilities
     • System external characteristics
       - Functionality
       - Reliability
       - Usability
       - Efficiency
       - Maintainability
       - Portability
       - Cost. (Hovy et al. 2002: 4)

Hovy et al. (2002: 5) run a variety of tests on the weighting of the metrics. It emerges that evaluators are still not aware of the influence which the characteristics of the user of the MT system have on the evaluation, an influence which is already indicated by our example above.

[…] evaluators tend to favour some parts […] - especially attributes related to the quality of the output text - and to neglect some others - for instance the definition of a user profile. (Hovy et al. 2002: 5)

Another important claim by Hovy et al. (2002) is that all (or at least most) of the criteria for evaluation should meet human intuition.

The measure:

  • must be easy to define, clear and intuitive;
  • must correlate well with human judgements under all conditions, genres, domains, etc., [...]. (Hovy et al. 2002: 5)

At this point, one can conclude that there are various arguments to justify thorough CL and evaluation training for future translators:

  1. As translators have to work with tools for information management, MT and CAT, they should be able to decide which system or resources are adequate for a given task.
  2. Translators should know how to improve tools by constructing adequate databases (referred to as linguistic resources in the list of Hovy et al. 2002).
  3. Most of the CAT tools expressly designed for translators are commercially developed and very seldom subject to scientific evaluation research. Students of translation who receive training in CAT tools represent a group of independent experts who might assist research during their studies and at the same time improve the conditions of their future profession. In order to fulfil this task, however, they require expert knowledge in CL.

Furthermore, since researchers are principally dependent on large sets of reference data, the input provided by user evaluations can help in the development of a comprehensive set of evaluation metrics and, in a second step, may bring about an improvement of the translation tools themselves:

Several automatic measures for MT evaluation have been proposed, and computational tools to carry them on effectively are now available. […] all of these measures make heavy use of large sets of reference data […]. (Popescu-Belis et al. 2002: 17)

Popescu-Belis et al. (2002) propose building a corpus of corrected translation examinations, where the quality of a translation can be assessed through the grade awarded. In our view, this proposal points in the same direction as ours: translation students can and should contribute to the evaluation of tools. In the case of reference texts, two different aspects of tool improvement are involved: indirect support through evaluation, and direct support effected by the creation of databases which, as mentioned above, can in turn be used as resources for CAT and MT systems.

The provisional status of NLP evaluation outlined here may suggest that teaching evaluation skills is rather problematic. We would like to argue that, if well-established conventions for the evaluation of teaching in general are used, an initial step can be made towards integrating evaluation into the curriculum without students having to be troubled by the fragmentary nature of the field as a whole. At the same time, future translators will become conscious of their role as active participants and contributors in the field of Computational Linguistics. With specific reference to students of Computer Science (Computer Science Subjects, in contrast to translators as NCSS), Hahn/Vertan (2003) argue for the integration of evaluation as early as possible in hands-on lectures on MT. They claim that, as a result,

  • the correspondence between design decisions and evaluation is clear for all students,
  • evaluation of translation quality can be discussed in relation to system design alternatives. (v. Hahn/Vertan 2002: 18)

In the following section, we will show how evaluation procedures can be introduced to NCSS in the context of e-learning. From a methodological point of view, software evaluation represents a specialized field of CL and thus cannot be introduced directly to NCSS. In our case, the familiar procedure of teaching evaluation is used to introduce general principles of software evaluation. First, a very small study is presented as an indication that NCSS in general feel comfortable with e-learning lessons. Second, a complete e-learning course intended to train students in the various tasks of the translation process, and in the parallel use of tools to manage this process, is presented and its evaluation discussed.


4 A Case Study in E-Learning: "Tools for Translators"

Many issues in the area of e-learning still have to be explored. According to Franzen (2003), the terminology changes nearly every year, and terms like web-based training, blended learning or e-learning might be used for what amounts to the same concept.

As an introduction to the field, we would like to present a small empirical study. The study was aimed at testing the success of e-learning without the subjects being aware that they were being tested. 37 translation students with basic theoretical knowledge and practical experience of one CAT system were asked to evaluate another system presented in an e-learning tutorial. The tutorial consisted of five lessons (Transit Satellite PE Tutorial, cf. Star AG 2003) which included a small number of interactive features. The fact that this tutorial is designed to promote the system and would not demonstrate its disadvantages can be ignored here, because the goal of our study was not to test features of the system but the success of the tutorial in imparting its central messages. In particular, the following questions were addressed:

  1. Do subjects feel able to evaluate the CAT system only on the basis of an e-learning tutorial (and thus have at least the feeling that they have learned something about the system)?
  2. Are there features of the system that are mentioned by many or all of the evaluation groups? A positive answer here would indicate that the core messages of the tutorial are evident to all users.

The subjects were provided with the tutorial and asked to evaluate the tool in a written report. As group-work was possible, we received 20 evaluation reports with statements addressing questions 1) and 2).

Only 9 reports explicitly referred to the reduced validity of an evaluation based on an e-learning tutorial rather than on the system itself. We must therefore assume that more than half of the evaluation groups (11 out of 20) felt sufficiently informed by the electronic tutorial to perform their assigned task.

For question 2, although the reports contained a variety of formulations, the majority focussed on three features which were regarded as central to the system:

  1. Usability (indicated in 15 of the 20 reports, the highest degree of ranking being represented by the following remark: "It is fun to work with this system");[3]
  2. multifunctionality (13 of the 20 groups had the impression that in the system promoted by the tutorial, different functions could be applied at the same time);
  3. facilitation of the working process (15 of the 20 groups felt that work performed at separate stages of the translation process would be facilitated by the system).

It goes without saying that the whole study should be repeated more systematically and that these results neither stand as a valid evaluation of a CAT system nor are representative in any respect. What is indicated here is simply that NCSS feel comfortable with e-learning and are able to gather information about tools from e-learning sequences alone.

In the following, two more thorough studies will be described.

4.1 "Tools for Translators"

As already indicated in section 1, most practitioners and theorists agree that translation competence - defined by the PACTE Group (2000) as "the underlying system of knowledge and skills needed to be able to translate" - goes beyond the skills normally associated with bilingualism and communication in a foreign language. Translation is a unique mode of language use (Neubert 1997: 23). The ability to transfer texts implies knowledge structures that are not usually considered part of bilingualism, and though the cognitive basis of professional translation may derive from cognitive skills shared with bilinguals and foreign-language users, other cognitive structures are clearly added (Shreve 1997: 121ff.). Superficial observations of the translation process show translators mobilising very diverse, interdisciplinary skills and knowledge to accomplish their tasks: knowledge of languages, subject and real-world knowledge, research skills, cognitive qualities such as creativity, and problem-solving strategies (Presas 2000: 28). More specifically, recent research postulates that translation competence comprises a number of dynamic sub-competences, each with its own cluster of components, not all of which are believed to inform bilingual language use or be a part of non-translational communication. Thus in addition to communicative competence in both source and target language, seasoned professional translators are distinguished by the extra-linguistic, psycho-physiological, instrumental-professional, transfer and strategic competences they possess (Neubert 1997, Neubert 2000, Massey 2001, PACTE 2000; see also section 2 of this paper).

There are obvious implications for translator training, not all of which are comprehensively addressed by the institutions which train translators. In particular, many curricula fail to offer a systematic approach to the development of instrumental-professional competence (which comprises the knowledge and skills related to the tools of the translator's trade and to the translation profession as a whole), despite the fact that empirical studies (Fraser 1999, Kussmaul 1995: 22-25, 123-125) show distinct differences between professional translators and non-professionals in appropriate resource-use strategies. More seriously, underdeveloped instrumental-professional competence severely impairs the acquisition of transfer competence, recognized by PACTE (2000: 102) and Neubert (2000: 6) as the central competence that integrates all the others and as the key distinguishing province of the translator, and of strategic competence, the ability to control the interaction between all the other competences in order to accomplish translational objectives and effect an adequate transfer. It is therefore clear that instrumental-professional competence should be acquired and developed as early as possible on any translator training programme.

Such a gap in its own curriculum prompted the Degree Programme in Translation at ZHW to develop a course which would introduce first-year students to the basic resources and tools available to the translator and at the same time provide them with an overview of the translator's professional environment. Similar ideas are expressed by Kiraly (2000: 123-139).

Specifically, participants were to:

  • learn what tools are available to the translator, why these tools should be used, where and how they can be found, and how they can be employed with maximum efficiency and effectiveness;
  • acquire useful insights into the professional practices, processes and workflows of translation;
  • increase their awareness of, and sensitivity to, complex translational problems;
  • evolve, both individually and in groups, appropriate problem-solving strategies in handling text-based research tasks and assignments;
  • strengthen their ability to work in teams and reinforce their willingness and capacity to cooperate with others;
  • develop self-reliance and independence in their studies.

The last two objectives accord with strategic goals set out in ZHW's revised teaching policy aimed at reducing contact hours and increasing the proportion of time students spend on group-work and self-study. The new policy, together with wholly practical concerns about the high number of contact hours for students and a limited infrastructure, also lay behind the decision to develop an e-learning course which the students could work on from anywhere and at any time. The term e-learning is here used synonymously with web-based training (WBT) and is taken to mean, to quote William Horton's (2000: 2) eminently practical definition, "any purposeful, considered application of Web technologies to the task of educating a fellow human being".

The resultant course, "Tools for Translators: Basic Resources for the Study and Practice of Translation", which went online in November 2002 after a year-long development phase, is just that: a web-based e-learning offering. To date it has been run twice as a pilot co-requisite for first-year German-English translation courses, and has been successfully completed by 29 students.

The course comprises two main phases, each lasting a number of weeks. During the first phase, students are expected to work through four instructional modules focusing on where basic tools and resources can be found, why they should be used and how they can be most effectively employed. Module 1 introduces the overall concept of translation and the main features of the profession. It analyzes the role of the translator as an intercultural communicator and outlines the competences required to perform that role, including the skills and knowledge that combine to make up instrumental-professional competence. Module 2 is devoted to major aspects of practical translation and the translator's professional environment. It looks at the tools, procedures and skills required at various stages of the translation process, and then presents a survey of the work market, professional organisations, periodicals and other sources of support for translators. Modules 3 and 4 take a close look at research methods and resources for the translator. They inform students about what resources are available and train them in their proper use. A variety of resources are explored, ranging from printed media to internet-based tools.

To accommodate various learner types and affinities, students are given the option of working through the four modules in any order they wish, though the suggested path is the order of presentation. Each module is divided into two or three units. These include regular activities and tasks (discussion-board postings, untracked multiple-choice and text-entry exercises, etc.) so that students can test their progress. The learning sequences are what Horton refers to as "classic tutorials" (2000: 136) and have a broadly cognitivist design. Peer-to-peer discussion is encouraged by monitored discussion-board activities and collaborative spaces offering synchronous and asynchronous communication facilities. An online Help file, glossary and list of sources provide additional support, with direct e-mail links to IT and teaching staff also available.

The second phase begins only once participants have completed the first. Working in groups of four or five and collaborating in online learning spaces, students apply the knowledge and skills acquired in previous modules to solve practical translation problems presented in Module 5. This module is wholly constructivist in design and comprises three text-based research assignments corresponding to what Horton calls "exploratory tutorials" (Horton 2000: 143). The scaffolding of the assignments, which include tutor-moderated discussion-board contributions, progressively moves control from the instructor to the students along lines suggested by Kiraly (2000: 45-47, 79).

As already mentioned, the decision to offer "Tools for Translators" as an e-learning course was initially taken for practical reasons, as a means of sidestepping serious timetabling and infrastructural constraints, and in order to accommodate certain strategic goals of ZHW's reformed teaching policy. Yet there are also clear didactic advantages to designing a course on instrumental-professional competence as an e-learning offering.

The first concerns the rapid and enduring changes that have taken place in the way translators work in the information age. As Austermühl (2000: 1) observes, the main task of translation - transferring technical and cultural information - can only now be achieved through the use of extensive knowledge bases and electronic tools (a view confirmed by Egeberg 2003; see section 2 of this paper). An electronically delivered course which directly integrates such tools and requires learners to use them as they study provides a classic instance of learning-by-doing. Referring to the necessity of introducing computer-based classrooms, Kiraly (2002: 123) puts it the other way round: "If we accept the constructivist assumption that education should realistically reflect actual practice with respect to the tools, methods and procedures of the profession with which students are becoming acquainted, then the [traditional] type of […] classroom is inherently deficient regarding several important aspects of task authenticity". Not only are translators' tools very much a part of the translation process, but large-scale collaborative translation projects involving a number of translators at remote locations are becoming increasingly common (Kiraly 2000: 124). In both cases, e-learning provides an ideal environment for students to gain the authentic, hands-on experience they need as they work towards their future profession.

The second advantage derives from the possibilities inherent in hyperlinks. In enabling the non-linear presentation of knowledge in learning sequences, hyperlinking presents a powerful icon of non-linearity, whether embodied by the non-sequential nature of decision-making during the translation process itself (Massey 1998: 137f.) or by the dynamic, open-ended process which characterizes the acquisition of translation competence. For as Shreve (1997: 125) points out, translation competence is an endless process of building and rebuilding knowledge, evolving through exposure to a combination of training and continuous practical experience and leading to changes in the way that translators actually conceive of translation. Practice-oriented learning sequences with hyperlinked structures designed to reflect this process allow students to construct their knowledge in the same way that professional translators do. Initiates therefore do not simply acquire competences in preparation for the professional practice of translation but actually participate in an environment which simulates professional knowledge-building and decision-making structures.

Finally, the integration of a variety of Web-based media (text, animations, audio, video) not only allows the presentation of complex translation assignments, such as polymedial film-subtitling commissions, but also acts as a strong motivational factor for participants. This is borne out by the results of the evaluation conducted among students at the end of the course, which is the subject of the next section.

4.2 Evaluation of "Tools for Translators"

The "Tools for Translators" course was evaluated by participating students in the final week of the semesters in which it was offered. Since it was important to receive feedback on all aspects of the course, only those who had managed to complete the course took part. In accordance with the standard practice at tertiary educational institutions in Switzerland, the evaluation took place by means of an anonymous questionnaire and a follow-up face-to-face feedback session at which the results of the evaluation were presented and central issues discussed. This had the advantage of being a format and procedure familiar to our students from the annual evaluation of other lessons at ZHW.

The student questionnaire contained 24 separate statements about the course, the accuracy of which the informants could assess on a six-point scale analogous to the Swiss grading system at secondary and tertiary institutions. The statements addressed didactic issues (clarity of structure, clarity of learning objectives, adequacy of content, activities and assignments, adequacy of graphics, quality of online moderation), peer-to-peer collaboration (usefulness, quality) and usability (navigability, delivery and quality of audiovisual elements, quality of collaborative learning spaces, quality of Help files and glossary). Students could make additional comments after each of the 24 statements and additional general comments about the course and questionnaire on the last page of the document. The questionnaire also contained five questions about the length of time each student needed to complete the five modules of the course. These were intended to test the appropriateness of the time allocated (and thus of the ECTS credits awarded) for completion of the course, a notoriously difficult parameter to gauge when e-learning sequences are being planned. 19 students returned the questionnaire, which corresponds to a return rate of 65.5%.

The questionnaires and feedback sessions showed that, overall, the course was felt to be extremely useful.[4] The most positive responses concerned its didactic aspects, with particular importance being attached to the criteria of structural clarity, quality and frequency of moderation, adequacy of content and comprehensiveness, although the large amount of information and resources presented in the course prompted a number of informants to request continued access for reference purposes. Least positive were the results for peer-to-peer collaboration, especially in Module 5. Despite clear instructions for student coordinators to manage the group work during the second phase of the course, this was not always done, which seriously undermined the cohesion of the groups in question. Most groups tended simply to divide up tasks among members and work individually or in pairs, and three of the seven groups that completed the course resorted to face-to-face communication and simple e-mail collaboration. Asynchronous discussion-board communication was generally felt to be confusing and unsuitable for the sort of complex interactions required by large-scale collaborative assignments. The problems encountered during Module 5 were mainly blamed on time: although the original overall estimate of the time needed to complete the course proved accurate, too little had been allowed for this collaborative phase.

These findings demonstrate how vital the management and planning of collaborative study phases are. In particular, sufficient time must be allocated for their completion. They also show that it is not enough to apply general usability metrics to the selection of collaborative tools. Such choices can only be made by applying additional criteria relating to such specifics as the complexity and duration of particular assignments.

Above and beyond the practical uses of such feedback, what this suggests is that the most reliable evaluators of e-learning courses and tools are the e-learners themselves. It is they who provide the impulses and ideas for the continued development and improvement of e-learning offerings. Broadening the perspective, we can certainly say that e-learner evaluations are indispensable to the continued evolution of e-learning itself.


5 Outline of future work

Future translators have to acquire expert knowledge in the field of CL in general and of information management, MT and CAT tools in particular. The training of these students by means of e-learning seems to be a promising alternative to traditional lectures. Besides the well-known advantages of e-learning, including independence from temporal, spatial and infrastructural constraints such as restricted access to computerized classrooms, e-learning in the area of information management, MT and CAT produces a twofold effect, especially for so-called NCSS: in acquiring expertise in specific tools via e-learning, they are "forced" to work with the computer.

The introduction of evaluation to the curriculum of Computational Linguistics for Translators might be questioned at first sight. As shown in section 3, evaluation in most cases depends upon the features of the tested system, with the result that there is still no all-embracing theory of evaluation (despite the many efforts being made to establish one, as shown in section 4 with reference to MT). Nevertheless, evaluation of e-learning lessons by the learners remains necessary for the isolation of useful concepts in the field. At the same time, the learner-evaluators can themselves benefit from this process by gaining increasing sensitivity to their role as users and indirect developers. The long-term improvement of special tools for translators seems to be possible only if the translators themselves, professionals and students alike, are expressly consulted. It is they who deliver the resources for MT and CAT in the form of translations, alignments or parallel texts, they who know what information is necessary for the exigent task of translation, they who actually run through the translation process. If they are systematically trained in CL and bound into the process of evaluation, professional and trainee translators are likely to emerge as the most reliable evaluators of their tools.



1 CAT covers a broader domain than MAHT (Machine-Aided Human Translation) since CAT software often includes tools that are not central to translation itself.[back]

2 E.g. the paper documentation for a modern aeroplane is very extensive and, according to Hess (2002: 116), too heavy to be transported by the aeroplane itself.[back]

3 The original German remark is presented in English translation here.[back]

4 It would be interesting to compare the performance of students who followed the "Tools for Translators" course in translation examinations with that of students who did not. At the present time, however, no basis for a valid statistical comparison exists. Although it is true that those students who failed to complete the course performed consistently worse in the German-English translation examinations, it could equally well be argued that these were the weakest candidates in the first place. Nor, given the possible variables, would a comparison of performance across various language pairs be a particularly reliable indicator.[back]



Austermühl, Frank (2001): Electronic Tools for Translators. Manchester.

Carstensen, Kai-Uwe/Ebert, Christian/Endriss, Cornelia/Jekat, Susanne/Klabunde, Ralf/ Langer, Hagen (eds.)(2001): Computerlinguistik und Sprachtechnologie. Eine Einführung. Heidelberg/Berlin. (= Spektrum Lehrbuch).

Dabbadie, Marianne et al. (2002): "A Hands-On Study of the Reliability and Coherence of Evaluation Metrics". In: King, Margaret (ed.): Proceedings of Machine Translation Evaluation: Human Evaluators meet Automated Metrics. Workshop at the LREC 2002 Conference. Las Palmas, Spain: 8-16.

Dorna, Michael (2001): "Maschinelle Übersetzung". In: Carstensen, Kai-Uwe et al. (eds.) (2001): 514-521.

Egeberg, Karsten (2003): Anforderungen an den Übersetzer seitens der Wirtschaft. Presentation given at the Forum International CIUTI-Marché du Travail, May 21, 2003, Geneva.

FEMTI (2003): "A Framework for the Evaluation of Machine Translation in Isle". Online-Document, http://www.issco.unige.ch/projects/isle/femti/, last visited: September 17, 2003.

Franzen, Maike (2003): "Vorwort". Mensch und E-Learning. Beiträge zur Didaktik und darüber hinaus. Tagungsband zu Web-Based Training 2003. Fachhochschule Solothurn.

Fraser, Janet (1999): "The Translator and the Word: The Pros and Cons of Dictionaries in Translation". In: Anderman, Gunilla/Rogers, Margaret (eds): Word, Text, Translation. Liber Amicorum for Peter Newmark. Clevedon: 25-34.

v. Hahn, Walther/Vertan, Cristina (2003): "Specification and Evaluation of Machine Translation Toy Systems - Criteria for laboratory assignments". In: Teaching Machine Translation. Proceedings of the 6th EAMT Workshop, November 14-15, 2002. Manchester: 11-18.
See also http://mull.ccl.umist.ac.uk/staff/harold/teachingMT/, last visited: December 10, 2003.

Hess, Michael (2002): "Einführung in die Computerlinguistik I". Onlineskript, University of Zurich, http://www.ifi.unizh.ch/CL/hess/classes/ecl1/ecl1.0.l.pdf, last visited: September 17, 2003.

Horton, William (2000): Designing Web-Based Training: How to teach anyone anything anywhere anytime. New York.

Hovy, Eduard/King, Maghi/Popescu-Belis, Andrei (2002): "An Introduction to MT Evaluation". In: King, Margaret (ed.): Proceedings of Machine Translation Evaluation: Human Evaluators meet Automated Metrics. Workshop at the LREC 2002 Conference. Las Palmas, Spain: 1-7.

Jekat, Susanne J. (2002): Wunschliste Sprachtechnologie. Unpublished Paper, Zürcher Hochschule Winterthur.

Jekat, Susanne J./Schultz, Tanja (2001): "Evaluation sprachverarbeitender Systeme". In: Carstensen, Kai-Uwe et al. (eds) (2001): 523-540.

Jekat, Susanne J./Tessiore, Lorenzo (2000): "End-to-End Evaluation of Machine Interpretation Systems: A Graphical Evaluation Tool". In: Gavrilidou, Maria et al. (eds.): Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000), 31 May - 2 June 2000, Athens, Greece. Paris: no pages.

Also in: Working Papers in Multilingualism Series B 2000, No. 4.

Kiraly, Don (2000): A Social Constructivist Approach To Translator Education: Empowerment From Theory To Practice. Manchester.

Kussmaul, Paul (1995): Training The Translator. Amsterdam/Philadelphia.

Massey, Gary (1998): "Some Aspects of Computer-Based Translator Training". In: Lee-Jahnke, Hannelore (ed.): Equivalences 97: Die Akten. Computerwerkzeuge am Übersetzer-Arbeitsplatz: Theorie und Praxis. Bern: 137-144.

Massey, Gary (2001): Where Does Working Language End and Translation Begin?. Unpublished paper delivered at the Language Works Conference, September 28-29, 2001, Winterthur.

Massey, Gary (2002): Tools for Translators: Basic Resources for the Study and Practice of Translation. CD-ROM, Zürcher Hochschule Winterthur, funded by OPET/BBT under grant no. 2002-13, Creativetools@UAS.

Miyazawa, Orie (2002): "MT training of business people and translators". In: Teaching Machine Translation. Proceedings of the 6th EAMT Workshop, November 14-15, 2002, Manchester: 7-10.
See also http://mull.ccl.umist.ac.uk/staff/harold/teachingMT/, last visited: December 10, 2003.

Neubert, Albrecht (1997): "Postulates For A Theory Of Translation". In: Danks, Joseph H. et al. (eds): Cognitive Processes In Translation And Interpreting. Thousand Oaks: 1-24.

Neubert, Albrecht (2000): "Competence In Language, Languages, And In Translation". In: Schäffner, Christina/Adab, Beverly (eds): Developing Translation Competence. Amsterdam/Philadelphia: 3-18.

PACTE (2000): "Acquiring Translation Competence: Hypotheses And Methodological Problems Of A Research Project". In: Beeby, Allison et al. (eds): Investigating Translation: Selected Papers From The 4th International Congress On Translation. Barcelona, 1998. Amsterdam/Philadelphia: 99-106.

Presas, Marisa (2000): "Bilingual Competence And Translation Competence". In: Schäffner, Christina/Adab, Beverly (eds): Developing Translation Competence. Amsterdam/ Philadelphia: 19-31.

Popescu-Belis, Andrei/King, Maghi/Benantar, Houcine (2002): "Towards a corpus of corrected human translations". In: King, Margaret (ed.): Proceedings of Machine Translation Evaluation: Human Evaluators meet Automated Metrics. Workshop at the LREC 2002 Conference. Las Palmas, Spain: 17-21.

Schmid, Beat (2002): "661 389 Wörter pro Stunde. Ein Zentrum für maschinelles Übersetzen". Neue Zürcher Zeitung, 8. November 2002.

Schmitz, Ulrich (2002): Online-Review of Carstensen et al. (eds.)(2001). University of Essen. http://www.linse.uni-essen.de/rezensionen/buecher/computerlinguistik.htm, last visited: February 6, 2003.

Shreve, Gregory M. (1997): "Cognition And The Evolution Of Translation Competence". In: Danks, Joseph H. et al. (eds): Cognitive Processes In Translation And Interpreting. Thousand Oaks: 120-136.

Star AG (2003): Language technology & solutions. Online-Document http://www.starsolutions.net/html/eng/home/index.html, last visited: September 17, 2003.