21/12/2022

Machine translation for literary texts? An interview with Damien Hansen

When we talk about translation as an art, the first thing that comes to mind is often literary translation. Literary translation has a long history, dating back as far as the first surviving piece of literature, the Epic of Gilgamesh. It is also the domain most resistant to the advance of machine translation, given its higher demand for human creativity and imagination.

So naturally, when there’s any progress in this domain, we simply have to take notice. From the 2022 NeTTT conference, we have Damien Hansen and Emmanuelle Esperança-Rodier’s presentation, “Human-Adapted MT for Literary Texts: Reality or Fantasy?”. In this article, Damien has kindly answered a number of questions about their intriguing research.

Can you give us a quick explanation of what your talk was about?

The aim of this talk was to present a custom machine translation (MT) tool that we developed specifically for the literary domain as part of my PhD project, and in particular the findings of the evaluation process that I carried out with my colleague, Emmanuelle Esperança-Rodier. Our objective, however, was not just to build a system that is adapted to literary texts, but one that is also tailored to the style of an individual translator. And we are quite happy with the results that we presented.

The timing was ideal, as there is currently a growing branch of research at the crossroads between literary translation and new technologies, as manifested by an event that we recently held at the University of Liège, by the CALT Conference before that, and by the workshop organized just prior to the NeTTT conference. So the NeTTT Conference was also a good opportunity for us to engage with the scholarly community and suggest a new approach to machine translation, one more closely centered on the human aspects of human-machine interaction.

How do you define “human-adapted MT”?

Besides the need for a catchy and hopefully not too tacky title, the reason behind this “human-adapted” appellation is tied to the views and motivation underlying our work; that is, a development and fine-tuning task that goes beyond just domain adaptation and tries to place the human at the very center of the process.

The immediate and perhaps most obvious explanation has to do with developing a system that is able to learn and, to some degree, mirror a particular translator's idiosyncratic style. To be perfectly honest, this outcome was not the core of our initial research question, but rather the unintended though fortunate consequence of a lack of data. It became increasingly apparent as we progressed, however, and it dovetailed so nicely with our past research on computer-assisted literary translation (CALT) that it quickly became the main focus of our contribution; hence the “human-adapted MT”.

Although this is not an entirely new idea, we now have evidence that it works quite well and that it is indeed possible to create custom MT tools for individual human translators. In the end, this is simply another, albeit more specific, domain adaptation task, resulting in MT suggestions that are more relevant, unique and useful for professionals.
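To make the idea of treating an individual translator as a “domain” more concrete, here is a minimal sketch of what such a fine-tuning step could look like with the Hugging Face Transformers library. This is purely illustrative and not the authors' actual pipeline: the baseline model, the toy sentence pairs and the hyperparameters are all assumptions.

```python
# Illustrative sketch only: fine-tuning a generic pretrained MT model on a
# single translator's parallel corpus (not the authors' actual setup).
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "Helsinki-NLP/opus-mt-en-fr"  # assumed generic baseline model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical aligned sentences drawn from one translator's published work.
pairs = {
    "en": ["The dragon circled the tower.", "She answered without a word."],
    "fr": ["Le dragon tournoyait autour de la tour.", "Elle répondit sans un mot."],
}
dataset = Dataset.from_dict(pairs)

def preprocess(batch):
    # Tokenize the source side; the target side becomes the training labels.
    inputs = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["fr"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["en", "fr"])

args = Seq2SeqTrainingArguments(
    output_dir="translator-adapted-mt",
    num_train_epochs=3,
    learning_rate=2e-5,  # small rate: adapt the style without erasing the base model
    per_device_train_batch_size=8,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # the resulting checkpoint leans toward the translator's style
```

The key design point, under these assumptions, is simply that the tuning data comes from one translator's past work, so the updated weights nudge a generic model toward that person's lexical and stylistic habits.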

So it would seem that we have everything to gain from further adapting our systems, right? Not only in terms of quality, but perhaps creativity as well.

On the other hand, this point of view entails a slightly different stance by which MT shifts from a human-assisted machine translation model to a more computer-assisted human translation paradigm. This is also in line with the existing and growing literature on reception and user studies regarding translation technologies, which has shown that development choices, user-friendly interfaces and personalized work environments can have a significant impact on the effectiveness and acceptance of such tools.

Similarly, we are convinced that customized MT systems and additional attention to ergonomic considerations and interaction between humans and machines would benefit productivity, work conditions and the overall quality of translations. Of course, the advantages of such a system are easily conceivable as far as creative texts are concerned, but we see no reason why this could not apply to other domains as well.

What made you decide to attempt this project?

Originally, this started as a secondary study of a larger project on CALT, as I mentioned, inasmuch as we were wondering if MT could become a sort of adaptive and custom-made translation memory for professional literary translators. There was, in addition, a willingness to take an objective look at new technologies and their use when translating literature, with a view to overcoming the polarizing debates and the recurrent objections that are raised against computer tools and creative texts.

On a more personal note, this was also the ambition of a translator—that is to say, my somewhat selfish ambition—in learning more about MT, trying to reappropriate technologies for personal and creative uses, and showing that with a few conceptual and practical changes, we can do things in a way that is less confrontational and that really emphasizes the opposite yet complementary strengths of humans and machines when it comes to translation.

What was the project’s scope, and how well do you think it can be replicated?

At present, the project is centered on just one author, one translator and one saga, but this focus on a single use case was a necessary constraint of our experiment rather than a limit of the approach.

Knowing now that it can be done, we think that it could be replicated just as well for any other translator: there is in principle no reason why this should not work in other cases, so long as there is enough data to tune the system and, of course, translators who are willing to use such technology.

It just so happens that we have received interested inquiries from professionals intrigued by our work, so this might give us a chance to put things to the test with works in other genres, and we would love to explore this avenue.

Of course, there are not yet any tools that would make it easy for translators to train their own systems without some understanding of machine learning and programming. There are nevertheless a few tentative efforts in this direction, so who knows... Perhaps, in a few optimistic years, professionals will rely on the assistance of their own personally trained MT system(s).

What findings from this project would you consider the most significant?

While there is a tendency nowadays to build increasingly bigger models, with the aim of handling more and more language processing tasks, we obtained very nice results by instead scaling things down and focusing on smaller amounts of relevant, high-quality data. As a result, we were able to show that our system improved on certain aspects, such as lexical diversity and literalness, in comparison with publicly available systems that are trained on a lot more data.
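As a side note, “lexical diversity” can be approximated with very simple surface measures. The toy sketch below is our own illustration, not the authors' evaluation code: it computes a type-token ratio over hypothetical system outputs, and since such ratios are length-sensitive, in practice they would be computed on samples of comparable size.

```python
# Toy illustration (not the authors' evaluation code): type-token ratio (TTR)
# as a rough proxy for the lexical diversity of MT output.
def type_token_ratio(text: str) -> float:
    """Share of unique tokens among all tokens; higher means a more varied vocabulary."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Hypothetical outputs for the same source passage.
candidates = {
    "generic system": "he said that it was fine and he said it again",
    "adapted system": "he declared, as ever, that all was for the best",
}
for system, hypothesis in candidates.items():
    # NB: TTR is sensitive to text length, so compare samples of similar size.
    print(f"{system}: TTR = {type_token_ratio(hypothesis):.2f}")
```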

And more importantly, we noted that it was a lot closer to the human reference, not only in regard to simple lexical choices, but also because it reproduced more abstract strategies (omission of specific types of information, heavier syntactic reorganization, etc.) that are in line with the reference and that we could say are indicative of adaptation to the translator's style.

Now, this is just the beginning of an exploratory work. If anything, it shows that we have to consider how to make MT less constraining and more inspiring for translators. User reception studies have been getting more and more attention, but we still have some way to go, and we hope to contribute to that debate as a continuation of this project, along with the numerous ethical questions that arise with the mere possibility of literary MT.

 

This article is part of a series that takes a deeper look at the research presented at the 2022 NeTTT conference. You can find the rest here:

MT is not the future but the now: Highlights from the NeTTT conference (Day 1)
Context is key in MT: Highlights from the NeTTT conference (Day 2)
Towards better MT: Highlights from the NeTTT conference (Day 3)