A pivot language, sometimes also called a bridge language, is an artificial or natural language used as an intermediary for translation between many different languages – to translate between any pair of languages A and B, one translates A into the pivot language P, then from P into B. Using a pivot language avoids the combinatorial explosion of maintaining translators for every pair of supported languages: the number of translators required grows linearly with the number of languages, rather than quadratically, since each language needs only a translator to and from the pivot P rather than one for every possible partner language. The disadvantage of a pivot language is that each step of retranslation introduces possible mistakes and ambiguities – using a pivot language involves two steps rather than one. For example, when Hernán Cortés communicated with Mesoamerican Indians, he spoke Spanish to Gerónimo de Aguilar, who spoke Mayan to Malintzin, who spoke Nahuatl to the locals.
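The linear-versus-quadratic argument can be made concrete with a small count. A minimal sketch (the language counts chosen here are illustrative, not from the text): with n languages, direct translation needs a translator for every ordered pair, while a pivot needs only a pair to and from the pivot for each other language.

```python
def direct_pairs(n):
    """Directed translation pairs among n languages without a pivot: n * (n - 1)."""
    return n * (n - 1)

def pivot_pairs(n):
    """Directed pairs when all translation goes through one pivot:
    each of the other n - 1 languages needs one pair into and one out of the pivot."""
    return 2 * (n - 1)

for n in (5, 10, 24):
    print(f"{n} languages: {direct_pairs(n)} direct pairs vs {pivot_pairs(n)} via pivot")
```

For 24 languages, for instance, 552 directed pairs collapse to 46 when every translation passes through a single pivot.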
Pivot coding is also a common method of translating data for computer systems. For example, Internet protocols, XML, and high-level languages are pivot codings of computer data, which are then often rendered into internal binary formats for particular computer systems. Unicode was designed to be usable as a pivot coding between various major existing character encodings, though its widespread adoption as an encoding in its own right has made this usage unimportant.
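The Unicode-as-pivot idea can be shown in a few lines: to convert bytes between two legacy encodings, a program decodes into a Unicode string (the pivot) and re-encodes into the destination encoding, rather than implementing a direct converter for every encoding pair. A minimal sketch using Python's standard codecs:

```python
# Bytes as stored by a system using ISO-8859-1 (Latin-1).
legacy_bytes = "café".encode("latin-1")

# Pivot step: decode the legacy bytes into a Unicode string.
text = legacy_bytes.decode("latin-1")

# Re-encode the pivot representation for a system expecting Windows-1252.
converted = text.encode("cp1252")

# The round trip preserves the original text.
assert converted.decode("cp1252") == "café"
```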
Current statistical machine translation (SMT) systems use parallel corpora for the source (s) and target (t) languages to achieve good results, but good parallel corpora are not available for all language pairs. A pivot language (p) enables a bridge between two languages for which existing parallel corpora are wholly or partly lacking.

Pivot translation can be problematic because of the potential loss of fidelity in the information forwarded through the different corpora. When two bilingual corpora (s-p and p-t) are used to build the s-t bridge, linguistic data are inevitably lost. Rule-based machine translation (RBMT) helps the system recover this information, so that the system does not rely entirely on statistics but also on structural linguistic information.

Three basic techniques are used to employ a pivot language in machine translation: triangulation, which focuses on phrase paralleling between source and pivot (s-p) and between pivot and target (p-t); transfer, which translates the whole sentence of the source language into the pivot language and then into the target language; and synthesis, which builds a corpus of its own for system training. The triangulation method calculates the probability of both translation correspondences and lexical weights in s-p and p-t to induce a new s-t phrase table. The transfer method simply carries out a straightforward translation of s into p and then of p into t, without probabilistic tests. The synthesis method uses an existing corpus of s and tries to build its own synthetic corpus from it, which the system uses to train itself; a bilingual s-p corpus is then synthesized to enable p-t translation.

A direct comparison between the triangulation and transfer methods for SMT systems has shown that triangulation achieves much better results than transfer. All three pivot language techniques enhance the performance of SMT systems.
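The core of triangulation can be sketched as marginalising over pivot phrases: a source-to-target probability is induced as p(t|s) = Σ_p p(t|p)·p(p|s). The sketch below is a simplification (real systems also combine lexical weights and prune the induced table), and the phrase pairs and probabilities are hypothetical, chosen only for illustration:

```python
# Hypothetical s-p phrase table: p(pivot phrase | source phrase).
s_to_p = {"maison": {"house": 0.7, "home": 0.3}}

# Hypothetical p-t phrase table: p(target phrase | pivot phrase).
p_to_t = {
    "house": {"Haus": 0.8, "Heim": 0.2},
    "home": {"Heim": 0.9, "Zuhause": 0.1},
}

def triangulate(s_to_p, p_to_t):
    """Induce an s-t phrase table by marginalising over pivot phrases:
    p(t | s) = sum over p of p(t | p) * p(p | s)."""
    s_to_t = {}
    for s, pivots in s_to_p.items():
        table = {}
        for p, prob_sp in pivots.items():
            for t, prob_pt in p_to_t.get(p, {}).items():
                table[t] = table.get(t, 0.0) + prob_sp * prob_pt
        s_to_t[s] = table
    return s_to_t

induced = triangulate(s_to_p, p_to_t)
print(induced)
```

Here "Heim" receives probability mass through both pivot phrases ("house" and "home"), which is exactly the effect the transfer method forgoes by committing to a single pivot translation of the whole sentence.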
However, the synthesis technique does not work well with RBMT, and such systems perform worse than expected. Hybrid SMT/RBMT systems achieve better translation quality than strict SMT systems that rely on poor parallel corpora. The key role of RBMT systems is to help fill the gap left in the s-p → p-t translation process, in the sense that these parallels are included in the SMT model for s-t.