Every day, when we browse the internet, chat with a bot or open an application, there is likely artificial intelligence (AI) behind the experience. May 13 is Internet Day, celebrated by technology communities to reflect on the impact and advances of this technology, but it is worth pausing to think beyond the enthusiasm. The internet's reach has fueled the exponential growth of AI applications, but what does it really mean to integrate AI into our everyday tools? What risks do we take on when we delegate complex tasks to automated systems?
At Nuadda, as language and technology professionals, we believe this day should not only be a celebration, but also an opportunity to question, learn and act responsibly.
The rise of generative artificial intelligence
Generative AI (GenAI), built on models such as ChatGPT, Gemini or Copilot, has revolutionized entire sectors. These systems, known as LLMs (Large Language Models), use natural language processing (NLP) to generate coherent content, translate texts or answer questions. However, their sophistication does not mean infallibility.
As has been repeatedly demonstrated, these models can “hallucinate”, that is, invent facts with total conviction. They can also replicate social biases, consume enormous amounts of energy and lack any real understanding of context. The false case law recently generated in Mata v. Avianca, or Air Canada’s chatbot promising a non-existent discount, are just two examples of the legal and ethical challenges posed by the use of these technologies.
What about translation?
In the field of translation, the impact of AI has been equally profound. We have moved from computer-assisted human translation, to human-reviewed machine translation, and now to LLM-based translation. This evolution has advantages: the systems are more accessible, allow immediate translations and facilitate transactional tasks. But there are also risks.
Studies of post-editing, that is, the review of machine-translated content, indicate that professional translators make corrections to an average of 11% of AI-generated text, and that translation errors account for 80% of the serious or critical errors in multilingual documents. This is no small matter: a bad translation can damage a brand’s reputation, hinder legal or medical communication, or even put safety at risk.
Specific risks, human decisions
What is often forgotten amid the enthusiasm for automation is that AI bears no responsibility. As IBM stated in 1979: “A computer can never be held accountable. Therefore a computer must never make a management decision.” Despite this, more and more business processes, from human resources to customer service, depend on automated systems that make decisions affecting real people.
Moreover, AI starts from scratch every time. It does not remember style, terminology or translation history the way a professional or a well-configured tool would. It mixes registers, skips important details and can introduce biases that are hard to detect without careful review.
Recent cases, such as New York City’s official chatbot stating that there is no problem with serving “cheese bitten by rats”, illustrate how absurd blindly trusting an automated system can be. These examples, however striking, remind us of the need to always keep a human in command.
It is not enough to know. You have to question.
Working with AI requires a new kind of literacy. At Nuadda we recommend some key practices for integrating AI critically:
- Do not take results at face value. Verify the information and contrast it with reliable sources.
- Check the data, especially when the content is sensitive or technical.
- Anticipate errors. Hallucinations are common, even in advanced models.
- Question the biases. AI learns from the world… and from its prejudices.
- Take care of your communications. If the content matters, its translation matters too.
As the State of Machine Translation 2024 report notes, the perception of quality varies across cultures. What is “good enough” for an American audience may be unacceptable for a Japanese one. Translating is not just transforming words: it is adapting meaning, tone and nuance to each audience.
AI with meaning (and with common sense)
AI is neither good nor bad in itself. It is a powerful tool that should be used with judgment, and that starts with recognizing its limitations, not just its promises. Internet Day is a good opportunity to do so: to ask whether we want AI-generated content in every domain, whether it makes sense to entrust complex decisions to models without ethics, and whether we are prepared to live with systems that learn from us but do not fully understand us.
At Nuadda we work at the intersection of language, technology and people. We believe that technical knowledge should be accompanied by practical wisdom. As Miles Kington said: “Knowledge is knowing that a tomato is a fruit. Wisdom is not putting it in a fruit salad.”
The fact that AI is ubiquitous on the internet does not mean it should be in everything, and celebrating it today does not mean we should not demand more of it. Because what is truly smart is still knowing when and how to use it.