Methodology of Automatic Editing for Academic Writing Using Bidirectional RNN and Academic Dictionary

Younghoon Roh, Tai-Woo Chang, Jongwun Won

Abstract


Artificial intelligence-based natural language processing technology plays an important role in helping users write English-language documents. For academic documents in particular, English proofreading services should reflect the conventions of academic writing, such as formal style and technical terminology. However, these services usually do not, because they are built on general English sentences. In addition, because existing studies focus mainly on improving grammatical correctness, their ability to improve fluency is limited. This study proposes an automatic academic English editing methodology that conveys the meaning of sentences clearly through the proper use of technical terms. The proposed methodology consists of two phases: misspelling correction and fluency improvement. In the first phase, appropriate corrective words are suggested according to the input typo and its context. In the second phase, sentence fluency is improved by an automatic post-editing model based on a bidirectional recurrent neural network that learns from pairs of original and edited sentences. Experiments were performed on actual English editing data, and the superiority of the proposed methodology was verified.
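
To make the second phase concrete, the sketch below illustrates the general idea of a bidirectional-RNN automatic post-editing model that learns from pairs of original and edited sentences. It is a minimal PyTorch sketch under assumed design choices (a bidirectional GRU encoder, a GRU decoder trained with teacher forcing, and illustrative layer sizes and names such as BiRNNPostEditor); it is not the authors' implementation, whose details are given in the full paper.

```python
# Minimal illustrative sketch only: a bidirectional GRU encoder over the
# original sentence and a unidirectional GRU decoder that generates the edited
# sentence. Layer sizes, module names, and the toy training step are assumptions
# for demonstration and are not taken from the paper.
import torch
import torch.nn as nn


class BiRNNPostEditor(nn.Module):
    """Encoder-decoder over (original sentence, edited sentence) pairs."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bidirectional encoder reads the original sentence in both directions.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True,
                              bidirectional=True)
        # Map the concatenated forward/backward final states to the decoder size.
        self.bridge = nn.Linear(2 * hid_dim, hid_dim)
        # Decoder emits the edited sentence one token at a time.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_in_ids):
        # src_ids: original sentence tokens; tgt_in_ids: edited sentence tokens
        # shifted right for teacher forcing.
        _, enc_h = self.encoder(self.embed(src_ids))            # enc_h: (2, B, H)
        init = torch.tanh(self.bridge(torch.cat([enc_h[0], enc_h[1]], dim=-1)))
        dec_out, _ = self.decoder(self.embed(tgt_in_ids), init.unsqueeze(0))
        return self.out(dec_out)                                # (B, T, vocab)


if __name__ == "__main__":
    # One toy training step on random token ids standing in for a batch of
    # (original, edited) sentence pairs.
    vocab_size = 1000
    model = BiRNNPostEditor(vocab_size)
    loss_fn = nn.CrossEntropyLoss(ignore_index=0)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    src = torch.randint(1, vocab_size, (4, 12))   # original sentences
    tgt = torch.randint(1, vocab_size, (4, 12))   # corresponding edited sentences
    logits = model(src, tgt[:, :-1])              # predict each next edited token
    loss = loss_fn(logits.reshape(-1, vocab_size), tgt[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()
```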


