Use of machine translation tools exposes already vulnerable asylum seekers to even more risks

Reliance on machine translation tools in asylum-seeking procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centres to immigration courts.

An Afro-Indigenous man from Brazil who was seeking asylum in the US spent six months in US Immigration and Customs Enforcement (ICE) detention, unable to meaningfully communicate with anyone. The machine translation tools used by the US immigration system did not understand his regional accent or dialect. His asylum application contained errors: the name of the city he had previously lived in, Belo Horizonte, was translated literally as “beautiful horizon”. Another asylum seeker’s application was rejected because a machine translation tool rendered an “I” in their application as “we”, making it seem as if the application covered more than one person. In yet another case, a woman who sought asylum because of domestic abuse described her abuser as “mi jefe”, a term colloquially used to mean “father”; the translation service rendered it literally as “my boss”. These asylum applications were denied.

Such small language technicalities are often weaponised by governments to justify rejecting asylum claims and deporting people. The use of machine translation tools in place of human translators has continuously failed the most vulnerable, and emergency translation organisations like Respond Crisis Translation have stressed that these AI tools should never be used in high-stakes situations. Researchers have also said that these tools are inherently and systematically flawed, as they reflect and perpetuate existing biases and global power and economic imbalances. For example, English has the most data available to feed into AI systems because of its colonial and imperial history. Other languages, regardless of how widely they are spoken, have far fewer resources, as many of these large language models are developed in the West, where English is the default language. Machine translation systems are unable to capture the myriad cultural nuances and contexts, and thus continue to perpetuate and prioritise a Western worldview.
More importantly, however, while improving training datasets can be a small fix, Yaseen, a translator on the Afghan language team, has said: “data is still data and a human is a human”. No machine translation tool or automated system can or should replace the empathy, emotions and feelings involved in interacting with people who are vulnerable and traumatised. See: Lost in AI translation: growing reliance on language apps jeopardizes some asylum applications at the Guardian. Photo by Barbara Zandoval on Unsplash.

https://racismandtechnology.center/2023/09/29/use-of-machine-translation-tools-exposes-already-vulnerable-asylum-seekers-to-even-more-risks/