
Selective Classification Can Magnify Disparities Across Groups

The Stanford AI Lab Blog

Excerpt from the post's references, including "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization" and "Selective classification for deep neural networks."
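The excerpt points to selective classification, where a model abstains on low-confidence inputs. Below is a minimal sketch of a common confidence-threshold variant (softmax response); the function name, threshold value, and toy data are illustrative assumptions, not code from the post.

```python
import numpy as np

def selective_predict(probs: np.ndarray, threshold: float = 0.9):
    """Confidence-threshold selective classification (a common baseline):
    predict the argmax class when the top softmax probability clears the
    threshold, otherwise abstain (returned as -1)."""
    confidence = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    predictions[confidence < threshold] = -1  # abstain on uncertain inputs
    return predictions, confidence

# Toy usage: three examples, three classes.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.10, 0.85, 0.05]])
preds, conf = selective_predict(probs, threshold=0.9)
print(preds)  # [ 0 -1 -1] -> abstains on the two low-confidence inputs
```

As the post's title suggests, the caveat is that abstaining on low-confidence inputs can raise average accuracy while widening accuracy gaps between groups.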


The State of Transfer Learning in NLP

Sebastian Ruder

This goes back to layer-wise training of early deep neural networks (Hinton et al., 2006). From shallow to deep: over the last years, state-of-the-art models in NLP have become progressively deeper; where earlier models used only a few layers, current models like BERT-Large and GPT-2 consist of 24 Transformer blocks, and recent models are even deeper.
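To make the depth concrete, here is a rough sketch of a 24-block Transformer encoder stack in PyTorch. The hidden size, head count, and feed-forward width match BERT-Large's published configuration (1024, 16, 4096); everything else is simplified and not the models' actual code.

```python
import torch
import torch.nn as nn

# One Transformer encoder block with BERT-Large-like dimensions.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=1024,          # hidden size
    nhead=16,              # attention heads
    dim_feedforward=4096,  # feed-forward width
    batch_first=True,
)

# Stack 24 of them, the depth the excerpt attributes to BERT-Large and GPT-2.
deep_encoder = nn.TransformerEncoder(encoder_layer, num_layers=24)

# Toy forward pass: batch of 2 sequences, 8 token embeddings each.
x = torch.randn(2, 8, 1024)
out = deep_encoder(x)
print(out.shape)  # torch.Size([2, 8, 1024])
```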


ML and NLP Research Highlights of 2021

Sebastian Ruder

Excerpt from the post's references, including entries in Transactions of the Association for Computational Linguistics, 9 (pp. 978–994 and 570–585) and Advances in Neural Information Processing Systems 29 (NIPS 2016).
