Monday, March 09, 2026

No, AI won’t save a weak translation: subject-matter knowledge still matters

1. The “AI will cover my weak spots” story

An insidiously comforting story is doing the rounds: powerful AI has done away with the gap between specialist and non‑specialist translators. In this story, subject‑matter expertise quietly moves from a must to an afterthought: you can work with inexperienced (and conveniently cheap) translators, run everything through an MTPE process, or even exclude translators altogether and ask monolingual staff to confirm that the AI’s output looks respectable enough to ship.

The promise is simple: if the machine “knows” chemistry, finance, or property law, then the only need for a professional is as a token “human in the loop”, someone to tidy up the target language, accept most suggestions, and – most importantly – take the blame if the project goes wrong. It’s a reassuring picture for buyers under pressure to cut costs. The problem is that it bears little resemblance to how translation decisions should be taken in high‑risk situations or for specialized content.

In my workflow – the one I outlined in the first article of this series – AI only comes in after a human draft is complete. I translate first in a CAT tool, using glossaries and whatever domain knowledge I’ve built up; I revise and run proper QA.

Only then do I ask an AI assistant to review terminology, register, and style, with a strict “no change if the translation is already correct” constraint. AI can suggest alternative phrasings and highlight relevant references, but it can’t decide what’s appropriate for a specific audience, legal system, or communicative purpose. That judgment still depends on subject‑matter expertise.

2. Where AI fits

The key point of my first article was simple: AI is helpful in review because it can propose alternative translations, catch possible mistranslations, or suggest options I hadn’t thought of. As a primary translator, though, it’s dangerous: it will always produce a translation, but there is no guarantee that what it offers is correct rather than an unthinking literal translation or a smooth, fluent hallucination. That’s why I have two basic rules in my prompts for the review step: if the segment is already correct, the AI should say so and not suggest changes; and whenever it suggests a change, it should explain why.

The chemistry example from Article 1 – where “third/fourth/fifth/sixth group” became “group 13/14/15/16 elements” – worked because I knew enough about the subject to question the suggestion, and could then find evidence that it made sense in context.

This article picks up that thread. If AI is going to review our work, we still need the subject‑matter knowledge to decide when its suggestions are helpful, not merely plausible.

To give a concrete example, let’s look at a term that appears all the time in Italian property documents: catasto.


3. When the “correct” term isn’t clear

3.1 From “cadastre” to “land registry”

In Italian real‑estate documents, the words catasto and catastale crop up everywhere: contracts for the purchase, sale, or lease of property, and notarial deeds that formalize those transactions. They refer to the Italian land registry system, which records properties, their location and boundaries, and various values used for tax purposes. I see these terms in work for US clients buying or renting property in Italy, who need to understand what they’re signing. For them, the question isn’t “What is the technically correct term?”, but “What wording gives me a clear picture of what this document actually does, and whether I understand it well enough to sign it?”.

3.2 AI and MT’s usual choices

AI systems and MT engines almost always go for “cadastre” or “cadastral” when they translate catasto / catastale, terms that sound satisfyingly technical and precise. Black’s Law Dictionary, for example, defines “cadastre” as “a survey and valuation of real estate […] compiled for tax purposes”. For US lay readers, the term may look impressive, but it is opaque, because it does nothing to help them understand how a “cadastral survey” fits into the process of buying an apartment.

In real‑estate contracts and notarial deeds, this leads to prose that looks precise but fails to convey its meaning. A clause mentioning a mappa catastale may emerge as “cadastral map”, a reference to rendita catastale as “cadastral income”, and a simple check at the land office as “cadastral survey”. US buyers reading such phrases understand little: they don’t know who keeps these records, what authority they carry, or how they relate to anything they’ve seen back home.

3.3 Why I usually prefer “land registry”

When I translate for US clients dealing with Italian property, my guiding principle is simple: the client must be able to understand the meaning. I therefore generally prefer “land register” or “land registry” over “cadastre”, even if the latter looks closer to the Italian term. To a US reader, “land registry” signals “official place where property records are kept”, which is close enough conceptually and easier to understand for a non‑specialist.

So visura catastale is more helpful as something like “extract from the land registry (visura catastale)” than as a bare “cadastral search”. Mappa catastale becomes “land registry map” or “land registry plan”, depending on the document. For rendita catastale, I normally use an explanatory translation: for example, “notional property income for tax purposes (rendita catastale)”, keeping the Italian term in brackets so that any later cross‑checking with Italian documents or forms is easier.

This doesn’t mean “cadastre” is always wrong. In a different context – for instance, a technical article explaining how the Italian land registry system works – I might very well keep “cadastre” in my translation.

The important distinction is audience and purpose: translations should serve their intended function in the target context rather than mechanically mirror the source. AI can tell you that catasto often maps to “cadastre”, and it can list definitions and examples. What it cannot do is weigh up whether your specific readers are better served by that term, or by a more accessible “land registry” plus a brief explanation.

That is where professional subject‑matter knowledge and judgment come into play. AI proposes strings of words; a professional understands what those words mean, and which are appropriate for the task at hand.

3.4 “But you can always add a glossary”

A fair objection is that terminological issues can be handled through glossaries or client‑specific termbases. If a client decides that catasto is always “land registry” in their English documents, an automatic system (whether AI‑ or MT‑based) can be instructed to render it that way.

That’s helpful as far as it goes, but it doesn’t solve the underlying problem: even a mandated term still has to make sense for a specific audience and in a specific situation, and someone must know when that glossary choice no longer serves that purpose. Glossaries encode past decisions; they don’t remove the need to make new ones when the context shifts.
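
To make that point concrete: the mechanical part of glossary enforcement is trivially scriptable. The sketch below is my own illustration, not a feature of any CAT tool, and the segment pair and term mappings are invented. It flags target segments where a mandated term is missing — and that is exactly the limit of automation: it knows the rule was broken, not whether breaking it was the right call.

```python
import re

# Hypothetical client glossary (invented pairs): source term -> mandated target term
GLOSSARY = {
    "catasto": "land registry",
    "rendita catastale": "notional property income",
}

def check_glossary(source: str, target: str, glossary: dict) -> list:
    """Flag cases where a source term appears but the mandated target term is missing."""
    issues = []
    for src_term, tgt_term in glossary.items():
        if re.search(r"\b" + re.escape(src_term) + r"\b", source, re.IGNORECASE):
            if tgt_term.lower() not in target.lower():
                issues.append(f"'{src_term}' in source, but '{tgt_term}' missing from target")
    return issues

# Invented example: the target uses "cadastre", so the mandated term is flagged
problems = check_glossary(
    "Si allega la visura del catasto.",
    "An extract from the cadastre is attached.",
    GLOSSARY,
)
```

The script can tell you the approved term is absent; only a translator can tell you whether, in this particular document, that absence is an error or a deliberate, better choice.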

3.5 “Private international law” or “conflict of laws”?

The same pattern appears in other fields: AI offers something that looks legally or technically correct, but only subject‑matter knowledge tells you whether it fits the system, the audience, and the purpose of the text.

Take a standard sentence that a law firm might use to describe some services it offers:

Lo studio assiste i clienti in materia di diritto internazionale privato, ad esempio per individuare la legge applicabile ai rapporti patrimoniali tra coniugi di diversa cittadinanza.

A first draft translation might be:

The firm assists clients in matters of private international law, for example, in determining the law applicable to property relations between spouses of different nationalities.

Linguistically, there’s nothing wrong with that. An MT system or a rushed human translator could easily stop there. During review, though, I might decide to recast it as:

The firm advises clients on conflict‑of‑laws matters, for example, in determining which law applies to matrimonial property regimes between spouses of different nationalities.

Here I’m not correcting a mistranslation; I’m choosing a formulation that makes more sense for the likely readership. In an academic context, “private international law” may be the natural field label, and I would keep it as is, perhaps adding “(conflict of laws)” on first mention for US‑trained lawyers. In a law‑firm profile aimed at lay clients, though, I favor the more common “conflict of laws” and make “matrimonial property regimes” explicit.

AI can give you “private international law” and even offer “conflict of laws” as an alternative. What it cannot do is decide which label fits this specific text for this specific readership. That decision rests on your knowledge of how lawyers talk about these issues and how much conceptual scaffolding your readers already have.


4. “Truth is always a defense”: when the words are fine but the content isn’t

Sometimes the problem is not the translation but what the source text is claiming. Imagine a global corporate policy drafted at US headquarters:

Under our global communications policy, truth is always a complete defense to any defamation claim.

A correct translation might be something like:

Secondo la nostra politica globale sulle comunicazioni, la verità è sempre una difesa completa contro qualsiasi causa per diffamazione.

The translation is correct and mirrors the English source text. An inexperienced translator might sign off on it; even an experienced one, under time pressure, might not give it a second thought. Yet anyone who knows a bit of Italian defamation law will feel uneasy. In the Italian legal system, even truthful statements can be defamatory if they unjustifiably harm a person’s honor or reputation. The neat “truth is always a complete defense” formula simply does not hold under Italian law.

In a case like this, I would leave the Italian sentence as it is, because it reflects what the policy says, but I would flag it to the client. My note might say that this rule is modeled on US law and may need to be reviewed or rewritten for Italy before the policy is rolled out locally. No AI system will ever volunteer that comment: it does not know where one legal system stops and another begins.

This is another way subject‑matter expertise shows up in our work. Sometimes it means choosing “conflict of laws” over “private international law”. Sometimes it also means not changing a single word of the translation, but warning the client that the underlying claim doesn’t travel as well as they may think.


5. Domain expertise, and how to build it

“Subject‑matter expertise” might sound like a grand label, something only actual experts like lawyers or engineers can claim, but it’s more modest and more attainable than that. It’s the cumulative result of studying, gaining experience, paying attention to how people in a given field communicate among themselves, and noticing where systems and expectations differ across languages.

Part of it is exposure: reading real‑world documents rather than only parallel texts and glossaries. For real‑estate work, that might mean notarial deed templates, tax guides, and English‑language information sites for foreign buyers. For legal work, it might mean court forms, statutes, bar association materials, and law‑firm blogs in both languages. Over time, you build a mental model of who the actors are, what institutions exist, which text types are standard, and how people phrase things when they’re not writing for translators.

Another part is building your own reference system. I like to keep a corpus for each niche: links to the Italian penal, civil, and procedural codes, to their US counterparts, PDFs of relevant articles, copies of online forms, client‑side FAQs, and a log of past decisions for tricky terms. My reasoning for the choice between “cadastre” and “land registry” goes into that log. So does the decision to use “conflict‑of‑laws matters” on a particular law‑firm website, or to flag a “truth is always a defense” policy line for legal review instead of quietly normalizing it.

Expertise shows in how you use AI, rather than whether you use it at all. If you already know what the catasto is, AI can help you compare “land registry” against “cadastre” in actual usage, or suggest alternative ways of phrasing an explanation for US buyers. If you already understand the difference between libel and slander in US law and Italian diffamazione, AI can help you find references and alternative wordings for a translator’s note. Without that base, the same tool is just as likely to lure you into mistakes, because it will give you authoritative‑sounding answers that fit the wrong system.

Subject‑matter knowledge is not an optional extra that AI makes less important. It is what allows you to decide when the machine’s output is good enough, when it needs adapting for your readers, and when the problem lies not in the words but in the underlying assumptions. That is the judgment clients should expect when they hire a professional translator, and it is the part of the job no model can take over for you.


6. A practical subject‑matter checklist

I keep a simple checklist in mind when I translate with AI in the loop; at the top of that list is always to start with the purpose of the translation.[1]

1. What is the purpose of this translation?
What job does this text need to do in the target context – help a US buyer understand a contract, inform specialists, or support an academic article? The same Italian text may need to be translated differently in English for each purpose, as long as you don’t distort its legal effect and underlying meaning. An academic article on Italian property law might prefer “cadastre”, while the translation of a deed for a US buyer won’t.

2. Who will actually read it?
Are we translating for lay US property buyers, Italian notaries, in‑house counsel, or EU policy staff? Your choice between a specific technical term and an explanatory phrase depends on this. A term that works for a specialist audience may be pure noise for someone who just wants to know what happens if the roof leaks.

3. What does your audience need to do with it?
Do they need to sign it, follow it, file it, or simply be informed? Someone who has to sign a notarial deed needs more than a term that looks technically correct; they need a fair chance of understanding what they’re committing to. For a background briefing, on the other hand, a more technical label might be perfectly acceptable if it’s explained once.

4. Would this term mean anything to them?
If you showed your reader the term on its own, would they recognize it and know what it involves? If not, you need a better solution, possibly with the original term in brackets on first occurrence. The fact that AI can define the word doesn’t mean your readers will understand it when they see it in a contract.

5. Is there a real‑world equivalent in the target system?
Does your audience’s country or legal system have an institution that plays a similar role, even if the match isn’t perfect? In many English‑language contexts, “land registry” is closer to what a US buyer expects than “cadastre”, even if the Italian catasto is not identical to any US office. The goal is a responsible approximation, not a one‑to‑one mirage.

6. How much risk is involved in this choice?
If a non‑specialist misunderstands this term, are we looking at mild confusion, or potential legal or financial consequences? The higher the risk, the less the translation can afford to hide behind a dictionary gloss or an AI suggestion that “sounds legal”. In regulated or high‑stakes domains, “defensible but potentially misleading or unclear” is often the worst of both worlds.

7. How does this interact with your existing terminology?
Will accepting the AI’s suggestion break consistency with your glossaries or previous work for this client or domain? Sometimes the fix is as simple as giving the AI a client glossary or termbase so that it stops inventing alternatives. But a glossary doesn’t remove the need for judgment; it just moves it upstream, to when terms are chosen and to the cases where the approved term doesn’t fit the purpose. And when a client questions an inconsistency, AI will not defend your choices for you.

8. What role do you want AI to play?
Are you asking AI to propose synonyms, retrieve definitions, check consistency, or simply sanity‑check a decision you’ve already made? If you catch yourself expecting it to decide which option fits your purpose and audience, that’s a red flag. In my workflow, once I’m clear on these questions, AI is there to challenge me and to point out blind spots, not to choose the target term on my behalf.

If you can’t answer several of these questions with reasonable confidence, the problem isn’t your review prompt or your choice of AI tool. It’s that you don’t yet understand the subject – or the translation’s purpose – well enough to delegate any part of the decision‑making. That’s not a criticism; it’s a signal about where to invest your next hours of research.


7. Where this leaves human + AI workflows

Taken together, the picture is less dramatic than the “AI has done away with the gap” story, and much more useful. AI can help us check consistency, spot alternative phrasings, and compare how terms are used across sources, but it cannot decide which solution fits a given purpose, audience, and level of risk. That part depends on subject‑matter knowledge and on the professional responsibility you can’t outsource to a model.

In the workflow I described in Article 1, what matters most is when you use AI. I translate first, making concrete decisions about terms like catasto based on who will read the text and what they need to do with it; then I let AI challenge those decisions under strict “no unnecessary changes, always explain why” constraints. Used this way, AI doesn’t level the field between strong and weak translators; it widens the gap, because the more you know about your subject, the more the tool can help you.

The next articles in this series will pick up from here and get more practical. I’ll look at how to design prompts that reflect your existing decisions and glossaries, and at how to integrate AI review into a broader toolset that includes CAT tools, QA checks, and your own reference material. The core principle, however, will stay the same: AI can speed up parts of the process, but it cannot think through our choices or take responsibility for them. That’s still our job – and it’s where a professional translator’s value lies.


  • Notes

    [1] Reiss and Vermeer’s Skopos theory reminds us that the purpose of the translation should guide our choices, and Andrew Chesterman’s Memes of Translation is still one of the clearest accounts of how ideas like this shape everyday practice.

Thursday, March 05, 2026

Why I use AI for review, but not for translation (most of the time)

This is the opening article in a series about how I combine human translation with controlled AI review. I’m writing mainly for translators who don’t want to outsource their judgment to a black‑box tool, and for organizations that care more about process and reliability than “magic prompts”.

In this first article, I want to answer a simple question before we dive into scripts and workflows: why do I use AI almost only for review, not to produce the first draft?


My baseline: human first, tools as helpers

On a typical project, I follow a familiar workflow:

  • I translate segment by segment in a CAT tool (usually Trados or memoQ).

  • I draw on glossaries I’ve built over the years, and sometimes glossaries provided by the client.

  • I revise each segment in the CAT tool as I go.

  • At the end, I run a QA pass, usually with Xbench, to catch inconsistent translations and errors in spelling, numbers, tags, terminology, and similar issues.
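
As an aside, the kind of check a QA pass runs is easy to illustrate in a few lines. This is not how Xbench works internally — just a toy sketch, with invented segments, of one classic check: making sure the numbers in each target segment match those in its source.

```python
import re

NUM = re.compile(r"\d+(?:[.,]\d+)*")

def number_mismatches(segments):
    """Return indices of segments whose source and target contain different numbers.

    Real QA tools also normalize decimal/thousands separators
    (250.000 in Italian = 250,000 in English); this sketch does not.
    """
    bad = []
    for i, (source, target) in enumerate(segments):
        if sorted(NUM.findall(source)) != sorted(NUM.findall(target)):
            bad.append(i)
    return bad

# Invented segments; the second has a dropped zero in the target
segments = [
    ("La superficie è di 120 mq.", "The surface area is 120 sq m."),
    ("Il prezzo è di 250.000 euro.", "The price is 25,000 euros."),
]
```

A dedicated tool does this across thousands of segments at once, plus tags, terminology, and spelling — which is why the QA pass stays in the workflow even with AI review on top.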

AI comes in after the first human draft. I send selected segments or paragraphs to an AI assistant (Perplexity Pro) with a narrow brief: check terminology and register, suggest improvements where needed, but say “no change necessary” if the translation is already correct. I’m not asking it to invent content; I’m asking it to challenge my choices.

That distinction—AI for drafting vs. AI for review—is the point of this article.


What AI is actually doing to translation work

AI and MT have already taken over the bottom of the translation market and are chewing through the middle. Much low‑end work has either disappeared or returned as “human in the loop” post‑editing, often at rates that don’t reflect the risk.

At the same time, many translators are using the same tools to improve quality and consistency, especially on specialized material. There’s a big difference between:

  • “Let’s have AI churn out a first draft and you do a quick skim”, and

  • “You produce a solid human draft, then use AI to check terminology and style in a structured way.”

This series is about the second scenario.


Where AI actually helps in review

In one recent project, I translated Italian engineering and chemistry course descriptions, written in the 1980s, into English for an application being submitted now. The source reflected older terminology; the translation had to match current usage.

One syllabus section included:

Parte descrittiva: idrogeno; metalli alcalini; elementi del terzo gruppo; elementi del quarto gruppo; elementi del quinto gruppo; elementi del sesto gruppo; alogeni; elementi di transizione; chimica organica.

My first English draft was:

Descriptive section: Hydrogen; alkali metals; third‑group elements; fourth‑group elements; fifth‑group elements; sixth‑group elements; halogens; transition elements; organic chemistry.

With a “check and only suggest changes if needed” prompt, AI proposed:

Descriptive section: Hydrogen; alkali metals; group 13 elements; group 14 elements; group 15 elements; group 16 elements; halogens; transition elements; organic chemistry.

On the face of it, that’s a bold change: terzo gruppo becomes “group 13”. If the Italian says “third group”, “group 3” is the obvious reading.

The key step was to look at the surrounding content. The same section goes on to discuss:

  • third‑group elements and the industrial production of aluminium

  • fourth‑group elements, focusing on carbon and silicon

  • fifth‑group elements, then nitrogen, phosphorus, nitric and phosphoric acids, fertilizers

  • sixth‑group elements, then oxygen, sulfur, sulfur oxides, sulfuric acid

Mapped onto the modern periodic table, these topics clearly follow an older main‑group convention: aluminium family → Group 13, carbon/silicon → Group 14, nitrogen/phosphorus → Group 15, oxygen/sulfur → Group 16.

In this context, “group 13/14/15/16 elements” is the right modern phrasing in English. AI’s suggestion pointed in the correct direction, but I still had to read the syllabus, know the chemistry, and confirm the mapping against current references. The useful part was speed: Perplexity also surfaced relevant reference material, so checking the group numbers took minutes rather than a small research session.

A second example from the same materials is simpler. One course title read Automazione e Regolazione. My first draft was “Automation and Regulation”, which is literal but slightly off in engineering. When I asked AI to review the title in context (“US English, engineering university course titles”), it suggested “Automation and Control” and noted that regolazione here refers to automatic control systems and control theory, for which “control” is the standard English term.

It’s a small change, but it makes the course title sound like something an engineer would actually write today, not a direct echo of the Italian.

Both examples show the pattern. I draft the translation, then use AI to ask: “Is this how a current English‑language textbook or course catalog would phrase it?” Sometimes the answer is “no change necessary”, which is helpful in itself. Sometimes I get a better term (“Control” instead of “Regulation”). Sometimes I get a suggestion that only holds up once you check it against the underlying discipline. In all cases, I decide what goes into the final draft; AI just helps me interrogate my own work.


Why I don’t use AI for the first draft

If AI can do all that in review, why not let it produce the draft and just “fix things up” afterwards?

I resist that for a few reasons:

Hallucinations and smooth nonsense
Modern systems can produce fluent, plausible text, but they’re not good at signaling uncertainty. In technical or academic work, that’s risky. I prefer to start from a translation whose meaning I know, rather than from a confident text that may be wrong in subtle ways.

Terminology drift and inconsistency
Over a long document, AI can drift in terminology, use different phrases for the same concept, or shift definitions. Cleaning that up after the fact is often harder than keeping terms under control while writing.

The “expert as janitor” problem
“We’ll have AI do the first draft and you just review it” usually means “take on the liability of spotting errors, but at a discount”. An AI‑first draft often has a higher error rate than a careful human draft, and responsible review takes time. It’s not quick; it’s just different work.

Control over voice and argument
In some content, structure and tone are part of the meaning. If I let a tool produce the first version, I still have to rebuild the argument, nuance, and hedging afterwards.

In short: I’d rather think through the text once as I write it, then use AI to check and refine, than spend the same or more time trying to infer what a system “meant”.


How I keep AI in its lane

A lot of this comes down to how you frame the task.

For review, my prompts usually say, in effect:

  • Focus on terminology, register, and clarity.

  • Suggest changes only where something is incorrect, unclear, or clearly suboptimal.

  • If the translation is already correct, respond with “No change necessary.”

  • If you’re uncertain about a term, say so and give alternatives.

This reduces noise: I don’t want the tool rewriting clean sentences. It also makes it clearer which suggestions are genuine corrections vs. preferences.
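
For readers who script their review step, those constraints translate directly into a prompt template. What follows is a simplified sketch of the kind of prompt I mean, not my exact wording; whether you paste it into a chat window or send it via an API doesn’t change the idea.

```python
# Sketch of a review prompt encoding the constraints listed above; not exact wording.
REVIEW_PROMPT = """\
You are reviewing a human translation from {src_lang} into {tgt_lang}.
Focus only on terminology, register, and clarity.
Suggest a change only where something is incorrect, unclear, or clearly suboptimal.
If the translation is already correct, respond exactly: "No change necessary."
If you are uncertain about a term, say so and give alternatives.
For every change you suggest, explain why.

Source: {source}
Translation: {target}
"""

def build_review_prompt(source, target, src_lang="Italian", tgt_lang="English"):
    """Fill the template for one segment pair."""
    return REVIEW_PROMPT.format(
        src_lang=src_lang, tgt_lang=tgt_lang, source=source, target=target
    )
```

The point of the fixed “No change necessary.” wording is practical: it makes the tool’s no-op answers easy to spot (or filter out programmatically), so real suggestions stand out.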

There are times when I “spar” with the system. It suggests something that doesn’t quite fit; I push back, adjust, and sometimes end up with a third option that’s better than either my original or its first attempt. But the direction is clear: I have the brief and the responsibility. AI is there to catch blind spots and propose options.

I’ll dig into prompt design in a later article. For now, the important point is that the prompt mirrors the workflow: human first, AI second, human last.


Practical takeaway (for colleagues and clients)

If you’re a translator, before asking “How can I get AI to translate this for me?”, try “How could AI help me review my own translation more effectively?” Start human, then use the tool to:

  • check that your terminology matches current usage in the field,

  • challenge titles, headings, and boilerplate that may have aged, and

  • reach reference material quickly when something looks like an old convention.

If you’re a client or project manager, the safer setup for serious content is still human translation plus AI‑assisted review, not AI‑first with “quick human post‑editing”. You want someone who understands both the domain and the tools, and who uses AI to support their judgment, not replace it.

In the next article, I’ll look at why subject‑matter expertise is still non‑negotiable in an AI world. Without that, AI review quickly turns into curating style instead of checking meaning.

Saturday, February 28, 2026

How I use AI to clean up OCR output

OCR tools such as ABBYY FineReader PDF, Adobe Acrobat (Scan & OCR), Readiris 17, and OmniPage are widely used to convert scanned pages into editable text. They handle the heavy lifting of recognition, but their output almost always contains familiar OCR artifacts: split words at line breaks, strange spacing, and character substitutions like “l” for “1” or “rn” for “m.”


AI tools such as Perplexity Pro, ChatGPT, and Claude can step in as a targeted post‑processing layer. Instead of the time‑consuming process of checking each suspicious character in your OCR tool or word processor, you let the AI work at sentence and paragraph level, using context to decide what the text was meant to say while keeping the content intact.

Suggested workflow

  1. Run OCR and export text
    Use your preferred OCR tool to recognize the document and export the result as plain text, Markdown, or Word. Avoid export formats that bury the text in heavy styling (e.g. richly formatted PDFs), because you want clean, editable text to send to the AI.

  2. Send chunks to the AI with a focused prompt
    Work in sections (for example, 5–10 pages at a time). Prompt example:
    “This text comes from OCR and contains typical OCR artifacts. Correct broken words, spacing, punctuation, and capitalization, but preserve all content, line breaks, and headings as much as possible. Do not summarize or omit anything.”

  3. Review against the scan in a side‑by‑side view
    Open the scanned PDF/image on one side of your screen and the AI‑corrected text on the other. Check critical areas: headings, numbers, tables, and domain‑specific terms (e.g. drug names, legal references, codes). Correct any terminology the AI “normalized” incorrectly.

  4. Use a diff tool between raw OCR and AI output
    Run a text diff between the raw OCR file and the AI‑corrected version. This helps you see exactly what changed, confirm that nothing was dropped, and quickly scan for any over‑confident “corrections” you don’t want.

  5. Automate for larger volumes
    For recurring projects, you can script the process: batch‑export from your OCR tool, segment the text, send each chunk to the AI via API, then reassemble the cleaned text. Human effort can then be reserved for spot‑checking and final QA instead of first‑pass cleanup.
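
For those who want to script steps 2–5, the chunking and diff parts need nothing but the standard library. In this sketch, `ai_clean` is a deliberate placeholder — I’m not showing any real vendor API, so you would replace it with your own API call carrying the cleanup prompt from step 2. The working parts are the paragraph‑based chunking and the `difflib` comparison from step 4.

```python
import difflib

def chunk_text(text, max_chars=4000):
    """Split OCR output into chunks at paragraph boundaries (blank lines)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def ai_clean(chunk):
    # Placeholder: replace with your actual API call, sending the chunk
    # together with the cleanup prompt from step 2. No real vendor API is shown.
    return chunk

def clean_and_diff(raw_text):
    """Clean each chunk, reassemble, and diff the result against the raw OCR."""
    cleaned = "".join(ai_clean(c) for c in chunk_text(raw_text))
    diff = difflib.unified_diff(
        raw_text.splitlines(), cleaned.splitlines(),
        fromfile="raw_ocr", tofile="ai_cleaned", lineterm="",
    )
    return cleaned, list(diff)
```

Splitting at blank lines rather than a fixed character count keeps paragraphs intact, which gives the AI the context it needs and keeps the diff readable.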

By combining a robust OCR engine with an AI cleanup step, you move from “barely readable extraction” to text that is reliable enough for translation, further editing, or long‑term archiving, without spending hours fixing the same types of errors by hand.

Tuesday, January 27, 2026

Translation Notes: “Director”

Translating “Director” into Italian in legal, business, and financial contexts is less straightforward than it looks.

Meanings of “Director”

In English corporate practice, a director is usually a member of the board, i.e., part of the company’s governing body. In job titles like “Finance Director” or “Marketing Director”, however, “director” often labels a senior manager heading a function, not necessarily a board member.

Italian translations

For Italian joint stock companies (“Società per Azioni”, or S.p.A.) and limited liability companies (“Società a responsabilità limitata”, or S.r.l.), the functional equivalent of a board “director” is an amministratore (or membro del consiglio di amministrazione). In this sense, director and amministratore both indicate a person who takes part in management and decision‑making at board level.

Within the board, titles such as “managing director” or “executive director” are commonly rendered as amministratore delegato (or “AD”), who is both an amministratore (board member) and the top executive charged with day‑to‑day management. In current Italian practice, the amministratore delegato is often referred to by the English acronym “CEO”.

By contrast, direttore in today’s corporate Italian typically indicates a high‑ranking employee: direttore generale, direttore finanziario, direttore commerciale, etc. Modern legal drafting tends to reserve amministratore for members of the board and direttore for management roles within the organizational chart.

False friends and traps

The main trap is translating every “director” as direttore. When “director” refers to a board member, the correct Italian is amministratore or membro del consiglio di amministrazione, not direttore. A direttore generale is usually a top manager reporting to the board, not one of its members.

Rule of thumb

If the person sits on the board → amministratore / membro del consiglio di amministrazione.
If it’s a functional job title → direttore (+ area: finanziario, marketing, ecc.).
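For translators who keep project glossaries in code, the rule of thumb can be sketched as a tiny lookup helper. This is purely illustrative: the function name and the hard-coded area mappings are my own assumptions, and the right rendering always depends on the actual document:

```python
# Illustrative glossary: extend per project.
FUNCTIONAL_AREAS = {
    "finance director": "direttore finanziario",
    "marketing director": "direttore marketing",
    "general manager": "direttore generale",
}

def render_director(title, on_board):
    """Suggest an Italian rendering for an English 'director' title."""
    key = title.strip().lower()
    if on_board:
        # Board member: amministratore, never direttore.
        if key in ("managing director", "executive director"):
            return "amministratore delegato"
        return "amministratore (membro del consiglio di amministrazione)"
    # Functional job title: direttore + area.
    return FUNCTIONAL_AREAS.get(key, "direttore")
```

A lookup like this can only encode the default choice; whenever the source is ambiguous about board membership, check the company's bylaws or organizational chart before deciding.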

Saturday, March 29, 2025

Italian Citizenship Law Update: Stricter Rules for Descendants Abroad

The new rules will affect individuals applying, or planning to apply, for Italian citizenship through Italian ancestry (known as "ius sanguinis"), as well as professionals (such as translators) who assist with these applications.
The decree-law approved today stipulates that descendants of Italians born abroad will automatically be citizens for only two generations: only those with at least one parent or grandparent born in Italy will be citizens from birth. In a second phase, a bill also approved today introduces further and more substantial changes to the citizenship law. Notably, citizens born and residing abroad will have to maintain real ties with Italy over time, exercising their citizenship rights and duties at least once every twenty-five years.

The reform is completed by a second bill that revises the procedures for recognizing citizenship. Going forward, residents abroad will no longer apply through consulates but will instead use a special centralized office at the Ministry of Foreign Affairs. A transition period of about one year is planned for organizing this office. The goal is to streamline procedures, achieving clear economies of scale. Consulates will focus on serving existing citizens rather than processing new citizenship applications. Additionally, the provision includes measures to enhance and modernize service delivery: legalizations, civil registry services, passports, and travel identity cards. Organizational measures are also planned to ensure the Ministry of Foreign Affairs increasingly serves citizens and businesses.

From a press release published on March 28, 2025 by the Italian Ministry of Foreign Affairs.

Wednesday, November 27, 2024

Try Perplexity Pro free for one month

I have a couple of discount codes to try Perplexity AI free for one month. I’ll give them to the first two people who ask, either by emailing me through this blog (see the link on the right) or by contacting me on LinkedIn.

Saturday, November 23, 2024

Perplexity AI, Your Translation Research, Terminology and Review Assistant

I’ve just added to the “Other Presentations” page of this blog the presentation I recently gave on Perplexity AI at the AI in Translation Summit.

What is Perplexity AI, and how can it help us?

A Perplexity query

Perplexity AI is a search tool that combines web searching with language models to deliver concise, contextual answers.

Unlike traditional search engines, it gives conversational answers with citations; and unlike AI tools such as ChatGPT, it offers real-time web searching. For translators this brings several advantages: access to current information on specialized topics, help in understanding the context of our projects, and support in terminology research for domain-specific terms.

Perplexity can verify short translated segments by cross-referencing them against its search results to flag potential errors. The paid version also has enhanced privacy features that allow secure upload of confidential documents.

Perplexity helps gather contextual information and find references for our translations.
 
It provides us with detailed overviews of complex topics.
 
For example, if we are translating a legal document about international child custody laws, we can ask Perplexity for a summary with a query like "Summarize the differences between child custody laws in Italy and the US", and the system compiles a concise summary from various sources. 

Perplexity incorporates contextual information in its responses; this means that we can ask follow-up questions to dig deeper without repeating the background. For example, we might inquire what weight is given to children's wishes in custody decisions.
 
Perplexity provides citations, which let us verify the sources it finds for us. Remember, though, that we should always cross-check crucial information: this system works best as a starting point to guide our research, not as the sole source of information.
 
By leveraging Perplexity's real-time search and summarization functions, we can find better information on complex subjects, speeding up our background research. 

While Perplexity is a powerful tool, it's important to remember that, like all AI models, it may occasionally produce errors and hallucinations: it’s a helpful assistant, not a replacement for our expertise.