On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules

Abstract

Pre-trained neural Language Models (PTLMs), such as CodeBERT, have recently been used in software engineering as models pre-trained on large source code corpora. Their knowledge is transferred to downstream tasks (e.g., code clone detection) via fine-tuning. In natural language processing (NLP), alternatives for transferring the knowledge of PTLMs have been explored, notably adapters: compact, parameter-efficient modules inserted into the layers of the PTLM. Owing to their plug-and-play nature and parameter efficiency, adapters are known to ease adaptation to many downstream tasks compared to fine-tuning, which requires retraining all of the model's parameters; however, their usage in software engineering has not been explored.
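
To make the idea of an adapter module concrete, the sketch below shows a generic bottleneck adapter (down-projection, non-linearity, up-projection, residual connection) in PyTorch. The layer sizes, names, and insertion point are illustrative assumptions, not the specific adapter configuration studied in the paper.

import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Compact adapter: down-project, non-linearity, up-project, plus a
    residual connection, inserted after a transformer sub-layer.
    Sizes are illustrative assumptions, not the paper's configuration."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the pre-trained representation intact;
        # only the small adapter weights are trained for the downstream task.
        return hidden_states + self.up(self.activation(self.down(hidden_states)))


if __name__ == "__main__":
    adapter = BottleneckAdapter()
    x = torch.randn(2, 16, 768)  # (batch, sequence length, hidden size)
    print(adapter(x).shape)      # torch.Size([2, 16, 768])
    # Only the adapter parameters would be updated; the PTLM stays frozen.
    n_params = sum(p.numel() for p in adapter.parameters())
    print(f"trainable adapter parameters: {n_params}")

In such a setup, the PTLM's weights are frozen and only the adapter weights are trained per task, which is what makes adapters parameter-efficient and easy to plug in and swap across downstream tasks.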

Publication
30th International Conference on Program Comprehension 2022