Bibliometric Insights into Translation Technology: A CiteSpace Analysis of Web of Science Core Collection Publications (2000-2024)

by Juan Wang
School of Foreign Studies, Yangtze University, Jingzhou, Hubei 434023, P. R. China.

10.46679/9788196780593ch05

Wang, J. (2024). Bibliometric Insights into Translation Technology: A CiteSpace Analysis of Web of Science Core Collection Publications (2000-2024). In T. Chuanmao & D. Juntao (Eds.), Translating the Future: Exploring the Impact of Technology and AI on Modern Translation Studies (pp. 85-136). CSMFL Publications. https://dx.doi.org/10.46679/9788196780593ch05

Abstract

At the dawn of the 21st century, the advent of innovative translation models significantly advanced the capabilities of machine translation. This paper conducts a comprehensive bibliometric analysis and visualization of the current landscape of translation technology research. Drawing on the Web of Science (WoS) Core Collection and employing CiteSpace 6.3.R1 as the analytical tool, this study examines document citations, authorship patterns, institutional collaborations, and keyword co-occurrence. The findings present a detailed overview of research from 2000 to 2024, highlighting prominent scholars and institutions, foundational literature, thematic areas, developmental trajectories, and prospective directions in the realm of translation technology.
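CiteSpace computes keyword co-occurrence networks internally from bibliographic records; the core counting step can be illustrated with a minimal Python sketch. The keyword lists below are hypothetical stand-ins for the author-keyword (DE) field of Web of Science export records, not data from the study.

```python
from itertools import combinations
from collections import Counter

# Hypothetical records, each a list of author keywords as they might
# appear in the DE field of a Web of Science plain-text export.
records = [
    ["machine translation", "neural network", "attention"],
    ["machine translation", "corpus", "attention"],
    ["neural network", "attention", "low-resource"],
]

def cooccurrence(keyword_lists):
    """Count how often each unordered keyword pair appears in the same record."""
    pairs = Counter()
    for kws in keyword_lists:
        # sorted(set(...)) deduplicates within a record and makes pairs unordered
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

matrix = cooccurrence(records)
# The most frequent pairs correspond to the strongest links in a
# keyword co-occurrence map of the kind CiteSpace visualizes.
print(matrix.most_common(3))
```

This is only the raw pair count; CiteSpace additionally applies time slicing, thresholding, and network metrics (e.g. betweenness centrality) before rendering the map.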

Keywords: Bibliometric analysis, Web of Science, Translation technology, Data visualization, CiteSpace

This chapter is a part of: Translating the Future: Exploring the Impact of Technology and AI on Modern Translation Studies

© CSMFL Publications & its authors.
Published: November 12, 2024

References

  1. Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., … & Bengio, Y. (2016). An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086.
  2. Barrachina, S., Bender, O., Casacuberta, F., Civera, J., Cubel, E., Khadivi, S., … & Vilar, J. M. (2009). Statistical approaches to computer-assisted translation. Computational Linguistics, 35(1), 3-28. https://doi.org/10.1162/coli.2008.07-055-R2-06-29
  3. BL, M. (2021). Analysis of Machine Translation Tools for Translating Sentences from English to Malayalam and Vice Versa. International Journal of Next-Generation Computing, 12(4).
  4. Casacuberta, F., & Vidal, E. (2004). Machine translation with inferred stochastic finite-state transducers. Computational Linguistics, 30(2), 205-225. https://doi.org/10.1162/089120104323093294
  5. Chen, C., & Chen, Y. (2005). Searching for clinical evidence in CiteSpace. In AMIA Annual Symposium Proceedings (Vol. 2005, p. 121). American Medical Informatics Association.
  6. Chen, C. (2016). CiteSpace: a practical guide for mapping scientific literature. Hauppauge, NY, USA: Nova Science Publishers.
  7. Chen, K., Wang, R., Utiyama, M., Sumita, E., Zhao, T., Yang, M., & Zhao, H. (2020). Towards more diverse input representation for neural machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 1586-1597. https://doi.org/10.1109/TASLP.2020.2996077
  8. Dabre, R., Chu, C., & Kunchukuttan, A. (2020). A survey of multilingual neural machine translation. ACM Computing Surveys (CSUR), 53(5), 1-38. https://doi.org/10.1145/3406095
  9. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) (Vol. 1, pp. 4171-4186). Stroudsburg, PA: Association for Computational Linguistics.
  10. Dogru, G., & Moorkens, J. (2024). Data Augmentation with Translation Memories for Desktop Machine Translation Fine-tuning in 3 Language Pairs. The Journal of Specialised Translation, (41), 149-178. https://doi.org/10.26034/cm.jostrans.2024.4716
  11. Doherty, S., & Kenny, D. (2014). The design and evaluation of a statistical machine translation syllabus for translation students. The Interpreter and Translator Trainer, 8(2), 295-315. https://doi.org/10.1080/1750399X.2014.937571
  12. Federico, M., & Bertoldi, N. (2005). A word-to-phrase statistical translation model. ACM Transactions on Speech and Language Processing (TSLP), 2(2), 1-24. https://doi.org/10.1145/1115686.1115687
  13. Gaspari, F., Almaghout, H., & Doherty, S. (2015). A survey of machine translation competences: Insights for translation technology educators and practitioners. Perspectives, 23(3), 333-358. https://doi.org/10.1080/0907676X.2014.979842
  14. Gehring, J., Auli, M., Grangier, D., Yarats, D., & Dauphin, Y. N. (2017, July). Convolutional sequence to sequence learning. In International Conference on Machine Learning (pp. 1243-1252). PMLR.
  15. Gong, Z., Zhong, P., & Hu, W. (2019). Diversity in machine learning. IEEE Access, 7, 64323-64350. https://doi.org/10.1109/ACCESS.2019.2917620
  16. Hutchins, B. (2004). Castells, regional news media and the information age. Continuum, 18(4), 577-590. https://doi.org/10.1080/1030431042000297680
  17. Jiang, K., & Lu, X. (2020, November). Natural language processing and its applications in machine translation: A diachronic review. In 2020 IEEE 3rd International Conference of Safe Production and Informatization (IICSPI) (pp. 210-214). IEEE. https://doi.org/10.1109/IICSPI51290.2020.9332458
  18. Jiang, S., & Chen, Z. (2023). Application of dynamic time warping optimization algorithm in speech recognition of machine translation. Heliyon, 9(11). https://doi.org/10.1016/j.heliyon.2023.e21625
  19. Juan, A., & Vidal, E. (2002). On the use of Bernoulli mixture models for text classification. Pattern Recognition, 35(12), 2705-2710. https://doi.org/10.1016/S0031-3203(01)00242-4
  20. Karyukin, V., Rakhimova, D., Karibayeva, A., Turganbayeva, A., & Turarbek, A. (2023). The neural machine translation models for the low-resource Kazakh-English language pair. PeerJ Computer Science, 9, e1224. https://doi.org/10.7717/peerj-cs.1224
  21. Kenny, D., & Doherty, S. (2014). Statistical machine translation in the translation curriculum: overcoming obstacles and empowering translators. The Interpreter and Translator Trainer, 8(2), 276-294. https://doi.org/10.1080/1750399X.2014.936112
  22. Khurana, D., Koli, A., Khatter, K., & Singh, S. (2023). Natural language processing: State of the art, current trends and challenges. Multimedia Tools and Applications, 82(3), 3713-3744. https://doi.org/10.1007/s11042-022-13428-4
  23. Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810. https://doi.org/10.18653/v1/P17-4012
  24. Koehn, P., & Knowles, R. (2017). Six challenges for neural machine translation. In Proceedings of the 1st Workshop on Neural Machine Translation (pp. 28-39). Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-3204
  25. Lalrempuii, C., Soni, B., & Pakray, P. (2021). An improved English-to-Mizo neural machine translation. Transactions on Asian and Low-Resource Language Information Processing, 20(4), 1-21. https://doi.org/10.1145/3445974
  26. Läubli, S., Castilho, S., Neubig, G., Sennrich, R., Shen, Q., & Toral, A. (2020). A set of recommendations for assessing human-machine parity in language translation. Journal of Artificial Intelligence Research, 67, 653-672. https://doi.org/10.1613/jair.1.11371
  27. Lin, Y., Guo, D., Zhang, J., Chen, Z., & Yang, B. (2020). A unified framework for multilingual speech recognition in air traffic control systems. IEEE Transactions on Neural Networks and Learning Systems, 32(8), 3608-3620. https://doi.org/10.1109/TNNLS.2020.3015830
  28. Liu, X., Zeng, J., Wang, X., Wang, Z., & Su, J. (2024). Exploring iterative dual domain adaptation for neural machine translation. Knowledge-Based Systems, 283, 111182. https://doi.org/10.1016/j.knosys.2023.111182
  29. Liu, Y., Gu, J., Goyal, N., Li, X., Edunov, S., Ghazvininejad, M., … & Zettlemoyer, L. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726-742. https://doi.org/10.1162/tacl_a_00343
  30. Mai, Y., & Yuan, X. (2024). Deep learning based optical network transmission application in Chinese English translation system in cloud computing environment. Optical and Quantum Electronics, 56(4), 598. https://doi.org/10.1007/s11082-024-06297-8
  31. Maimaiti, M., Liu, Y., Luan, H., & Sun, M. (2022). Data augmentation for low‐resource languages NMT guided by constrained sampling. International Journal of Intelligent Systems, 37(1), 30-51. https://doi.org/10.1002/int.22616
  32. Moorkens, J. (2017). Under pressure: translation in times of austerity. Perspectives, 25(3), 464-477. https://doi.org/10.1080/0907676X.2017.1285331
  33. Moorkens, J. (2018). What to expect from Neural Machine Translation: a practical in-class translation evaluation exercise. The Interpreter and Translator Trainer, 12(4), 375-387. https://doi.org/10.1080/1750399X.2018.1501639
  34. Nath, B., Sarkar, S., Das, S., & Mukhopadhyay, S. (2022). Neural machine translation for Indian language pair using hybrid attention mechanism. Innovations in Systems and Software Engineering, 1-9. https://doi.org/10.1007/s11334-021-00429-z
  35. Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., … & Auli, M. (2019). fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations (pp. 48-53). Stroudsburg, PA: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-4009
  36. Pathak, A., Pakray, P., & Bentham, J. (2019). English-Mizo machine translation using neural and statistical approaches. Neural Computing and Applications, 31(11), 7615-7631. https://doi.org/10.1007/s00521-018-3601-3
  37. Pellicer, L. F. A. O., Ferreira, T. M., & Costa, A. H. R. (2023). Data augmentation techniques in natural language processing. Applied Soft Computing, 132, 109803. https://doi.org/10.1016/j.asoc.2022.109803
  38. Peris, Á., Domingo, M., & Casacuberta, F. (2017). Interactive neural machine translation. Computer Speech & Language, 45, 201-220. https://doi.org/10.1016/j.csl.2016.12.003
  39. Post, M. (2018). A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers (pp. 186-191). https://doi.org/10.18653/v1/W18-6319
  40. Prates, M. O., Avelar, P. H., & Lamb, L. C. (2020). Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications, 32, 6363-6381. https://doi.org/10.1007/s00521-019-04144-6
  41. Riemland, M. (2022). Translation and technocracy in development: defining the potentials and limitations of translation technology for Maya inclusion in Guatemalan development. Linguistica Antverpiensia, New Series-Themes in Translation Studies, 21. https://doi.org/10.52034/lanstts.v21i.729
  42. Riemland, M. (2023). Theorizing sustainable, low-resource MT in development settings: Pivot-based MT between Guatemala’s indigenous Mayan languages. Translation Spaces, 12(2), 231-254. https://doi.org/10.1075/ts.22018.rie
  43. Ruffo, P. (2023). Literary translators and technology: SCOT as a proactive and flexible approach. Perspectives, 1-15. https://doi.org/10.1080/0907676X.2023.2296797
  44. Sennrich, R., & Haddow, B. (2016). Linguistic input features improve neural machine translation. arXiv preprint arXiv:1606.02892. https://doi.org/10.18653/v1/W16-2209
  45. Sharma, V. K., Mittal, N., & Vidyarthi, A. (2022). Context-based translation for the out of vocabulary words applied to Hindi-English cross-lingual information retrieval. IETE Technical Review, 39(2), 276-285. https://doi.org/10.1080/02564602.2020.1843553
  46. Singh, S. M., & Singh, T. D. (2022). An empirical study of low-resource neural machine translation of manipuri in multilingual settings. Neural Computing and Applications, 34(17), 14823-14844. https://doi.org/10.1007/s00521-022-07337-8
  47. Thomas, A., & Sangeetha, S. (2019). An innovative hybrid approach for extracting named entities from unstructured text data. Computational Intelligence, 35(4), 799-826. https://doi.org/10.1111/coin.12214
  48. Touvron, H., Bojanowski, P., Caron, M., Cord, M., El-Nouby, A., Grave, E., … & Jégou, H. (2022). ResMLP: Feedforward networks for image classification with data-efficient training. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4), 5314-5321. https://doi.org/10.1109/TPAMI.2022.3206148
  49. Tutek, M., & Šnajder, J. (2022). Toward practical usage of the attention mechanism as a tool for interpretability. IEEE Access, 10, 47011-47030. https://doi.org/10.1109/ACCESS.2022.3169772
  50. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
  51. Wang, H. (2023). Defending the last bastion: A sociological approach to the challenged literary translation. Babel, 69(4), 465-482.
  52. Wang, H., Wu, H., He, Z., Huang, L., & Church, K. W. (2022). Progress in machine translation. Engineering, 18, 143-153. https://doi.org/10.1016/j.eng.2021.03.023
  53. Wang, H., Xia, X., Lo, D., He, Q., Wang, X., & Grundy, J. (2021). Context-aware retrieval-based deep commit message generation. ACM Transactions on Software Engineering and Methodology (TOSEM), 30(4), 1-30. https://doi.org/10.1145/3464689
  54. Wang, R., Utiyama, M., Finch, A., Liu, L., Chen, K., & Sumita, E. (2018). Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10), 1727-1741. https://doi.org/10.1109/TASLP.2018.2837223
  55. Wang, R., Zhao, H., Lu, B. L., Utiyama, M., & Sumita, E. (2015). Bilingual continuous-space language model growing for statistical machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(7), 1209-1220. https://doi.org/10.1109/TASLP.2015.2425220
  56. Wang, S. (2023). Recognition of English speech using a deep learning algorithm. Journal of Intelligent Systems, 32(1), 20220236. https://doi.org/10.1515/jisys-2022-0236
  57. Xue, Y., Chen, C., & Słowik, A. (2023). Neural architecture search based on a multi-objective evolutionary algorithm with probability stack. IEEE Transactions on Evolutionary Computation. https://doi.org/10.1109/TEVC.2023.3252612
  58. Yu, J., Li, J., Yu, Z., & Huang, Q. (2019). Multimodal transformer with multi-view visual representation for image captioning. IEEE Transactions on Circuits and Systems for Video Technology, 30(12), 4467-4480. https://doi.org/10.1109/TCSVT.2019.2947482
  59. Zhang, J., Li, C., Liu, G., Min, M., Wang, C., Li, J., … & Chen, H. (2022). A CNN-transformer hybrid approach for decoding visual neural activity into text. Computer Methods and Programs in Biomedicine, 214, 106586. https://doi.org/10.1016/j.cmpb.2021.106586
  60. Zhang, J., Tian, Y., Mao, J., Han, M., Wen, F., Guo, C., … & Matsumoto, T. (2023). WCC-JC 2.0: A Web-Crawled and Manually Aligned Parallel Corpus for Japanese-Chinese Neural Machine Translation. Electronics, 12(5), 1140. https://doi.org/10.3390/electronics12051140
  61. Zhang, J., Zhou, L., Zhao, Y., & Zong, C. (2020). Synchronous bidirectional inference for neural sequence generation. Artificial Intelligence, 281, 103234. https://doi.org/10.1016/j.artint.2020.103234
  62. Zhang, J., & Zong, C. (2015). Deep neural networks in machine translation: An overview. IEEE Intelligent Systems, 30(5), 16-25. https://doi.org/10.1109/MIS.2015.69
  63. Zheng, W., Liu, X., Ni, X., Yin, L., & Yang, B. (2021). Improving visual reasoning through semantic representation. IEEE Access, 9, 91476-91486. https://doi.org/10.1109/ACCESS.2021.3074937
  64. Zhou, L., Zhang, J., & Zong, C. (2019). Synchronous bidirectional neural machine translation. Transactions of the Association for Computational Linguistics, 7, 91-105. https://doi.org/10.1162/tacl_a_00256
