19/10/2022

Is machine translation a form of “augmentation”? An interview with Dr. Sharon O’Brien

The first ever NeTTT conference was held in July this year, bringing together academics and industry professionals to share their knowledge and experience in the field of translation.

The first day began with a keynote speech by leading translation scholar Dr. Sharon O’Brien, Professor of Translation Studies at Dublin City University, titled “Augmented Translation: New Trend, Future Trend, or Just Trendy?”

In this post, we catch up with Dr. O’Brien for a deeper dive into the idea of augmentation, and what it means for machine translation in particular.

Can you give us a quick explanation of what your speech was about?

I wanted to explore the concept of “augmented translation” that has started to gain some traction in the industry, and to ask: what does this mean exactly? Is it just a new buzzword, or is it something different? And how does its current usage in the translation industry match with the research on augmentation and implementation in other sectors?

What exactly do you mean by “augmentation”, and how can you say that translation is already an augmented activity?

One definition of “augmentation” comes from a seminal paper written by Douglas Engelbart way back in 1962. He defined augmentation as:

“increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble.” (1962: Para 1a1).

So, augmentation is the use of tools and technology to assist us humans in solving complex problems.

Translation is sometimes seen as a “problem-solving” task, so use of technology to assist with more rapid comprehension and better solutions seems to fit this definition of augmentation well. In my talk, I argued that, with this definition in mind, translation has been augmented for years through the use of translation tools if you compare it with the earlier context of very little or no tool support.

You noted that among the technologies involved in translation, machine translation seems to be a wild card. Why do you think this is the case?

Again referring to Engelbart’s description, I think that we can easily accept that other tools (translation memory, term management etc.) clearly contribute to faster and better comprehension—most of the time and assuming they are well managed.

However, it is not yet widely accepted that MT can enable this, given that MT can produce erroneous or poorly formulated output which would slow down or hamper comprehension. MT is also not available for all languages and all domains, which is another limitation compared to other translation tools.

At present, to what extent do you believe MT can be classified as an augmented activity within translation, and why? What do you think it would take for MT to be considered as fully augmented as other technologies?

I think we need to rephrase this question. MT, or technology in general, is not an augmented activity. The augmentation refers to human cognition and how technology can enable our limited cognitive abilities to be augmented.

So, the question is: To what extent can MT augment human cognition and our ability to translate?

This is a complex question because it depends on so many factors: Whose cognitive abilities are being augmented? What are their existing translation abilities? How well does the MT system perform with the particular language pair, domain and context we have in mind, and so on?

I think that MT could certainly contribute to augmentation in certain circumstances. But, to test this, we would need to carry out experiments that implement the main stages of augmentation which are: monitoring of cognitive states and then applying relevant mitigation strategies.

The first step involves using sensors to detect when there is a cognitive challenge (e.g. lack of comprehension). Sensors such as ECG, EEG, eye tracking and pulse monitors are typically used in experimental conditions. Mitigations can be, for example, adaptation of the visual presentation, adaptation of the timing of a task, and switching levels of automation.
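The two-stage loop described above — monitor a cognitive-state signal, then apply a mitigation when a challenge is detected — can be sketched in code. This is a minimal, purely illustrative sketch: the sensor names, thresholds, and mitigation labels are my own assumptions, not part of any real system from this research.

```python
# Hypothetical sketch of the augmentation loop: (1) monitor cognitive
# state via sensor readings, (2) apply a mitigation strategy when a
# cognitive challenge (e.g. lack of comprehension) is detected.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SensorReading:
    pupil_diameter_mm: float   # e.g. from an eye-tracking camera
    pulse_bpm: float           # e.g. from a smart watch


def detect_cognitive_challenge(reading: SensorReading,
                               pupil_threshold: float = 4.5,
                               pulse_threshold: float = 95.0) -> bool:
    """Flag a possible comprehension problem (crude threshold rule)."""
    return (reading.pupil_diameter_mm > pupil_threshold
            or reading.pulse_bpm > pulse_threshold)


def choose_mitigation(reading: SensorReading) -> str:
    """Pick one of the mitigation types mentioned in the interview."""
    if not detect_cognitive_challenge(reading):
        return "none"
    if reading.pupil_diameter_mm > 5.5:
        # Strong signal: switch to a higher level of automation.
        return "raise_automation_level"
    # Milder signal: adapt the visual presentation or task timing.
    return "adapt_presentation_and_timing"


if __name__ == "__main__":
    calm = SensorReading(pupil_diameter_mm=3.8, pulse_bpm=72.0)
    strained = SensorReading(pupil_diameter_mm=5.9, pulse_bpm=101.0)
    print(choose_mitigation(calm))      # prints "none"
    print(choose_mitigation(strained))  # prints "raise_automation_level"
```

In a real experimental setting the simple thresholds here would be replaced by calibrated, ideally personalized models of each sensor signal, which is part of why tuning such systems is hard.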

I’m currently exploring what these might mean in terms of translation, so hopefully there’s more to come on that soon…

Do you think we’re close to developing the right conditions for this anytime soon?

Mentioning EEG, ECG and so on as the sensors required to detect cognitive states would probably make some eyes roll. We can't deploy those kinds of sensors in the normal world of work.

However, I believe that other sensors are already built into our regular work environments, for example, cameras for eye tracking and pulse monitors, the latter of which many of us wear in our smart watches or fitness devices already. So, some of the technology required to tackle augmentation is there.

The problem is that these sensors may not be very accurate, which could lead to annoying behavior by the supporting technology. Tuning these systems is known to be very challenging, and they would ideally be personalized, so I think we have some way to go before we have implementable systems.

This type of “monitoring” also introduces some very thorny ethical questions, which will need to be very seriously considered too.

 

This article is the first in a series that takes a deeper look at the research presented at the 2022 NeTTT conference. You can find the rest here:

MT is not the future but the now: Highlights from the NeTTT conference (Day 1)
Context is key in MT: Highlights from the NeTTT conference (Day 2)
Towards better MT: Highlights from the NeTTT conference (Day 3)