Yuanxin Liu1,2,3∗, Fandong Meng5, Zheng Lin1,4†, Jiangnan Li1,4, Peng Fu1, Yanan Cao1,4, Weiping Wang1, Jie Zhou5
October 9th, 2022

Despite their remarkable success, pre-trained language models (PLMs) still face two challenges: first, large-scale PLMs are inefficient in terms of memory footprint and computation; second, on downstream tasks, PLMs tend to rely on dataset bias and struggle to generalize to out-of-distribution (OOD) data. In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting performance. Such subnetworks can be found in three scenarios: 1) in the fine-tuned PLMs, 2) in the raw PLMs, which are then fine-tuned in isolation, and even 3) inside PLMs without any parameter fine-tuning. However, these results have only been obtained in the in-distribution (ID) setting. In this paper, we extend the study of PLM subnetworks to the OOD setting, investigating whether sparsity and robustness to dataset bias can be achieved simultaneously. To this end, we conduct extensive experiments with the pre-trained BERT model on three natural language understanding (NLU) tasks.
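
As a concrete illustration of the kind of sparse subnetwork discussed in the abstract, the sketch below prunes a pre-trained BERT encoder by weight magnitude using PyTorch and Hugging Face Transformers. It is a minimal, hypothetical example rather than the authors' procedure: the bert-base-uncased checkpoint, the 50% sparsity target, the 3-way task head, and the restriction to Linear layers are all assumptions made for illustration.

# Minimal sketch (not the authors' code): one-shot magnitude pruning of the
# Linear layers in a pre-trained BERT encoder. Checkpoint, sparsity level,
# and label count are illustrative assumptions.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # hypothetical 3-way NLU task
)

TARGET_SPARSITY = 0.5  # fraction of weights to remove (assumption)

# Zero out the smallest-magnitude weights in every Linear layer of the encoder.
for module in model.bert.encoder.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=TARGET_SPARSITY)

# Make the pruning masks permanent; the remaining non-zero weights define the
# subnetwork, which could then be fine-tuned (scenario 2) or evaluated with
# frozen parameters (scenario 3).
for module in model.bert.encoder.modules():
    if isinstance(module, torch.nn.Linear):
        prune.remove(module, "weight")

# Report the resulting sparsity of the encoder's Linear weights.
total, zeros = 0, 0
for module in model.bert.encoder.modules():
    if isinstance(module, torch.nn.Linear):
        total += module.weight.numel()
        zeros += (module.weight == 0).sum().item()
print(f"Encoder Linear sparsity: {zeros / total:.2%}")

In this sketch, pruning the fine-tuned model would correspond to scenario 1, while applying the same mask to the raw PLM before or without fine-tuning corresponds to scenarios 2 and 3, respectively.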
