
The Seven Trends in Machine Translation for 2019

NLP People

Hundreds of researchers, students, recruiters, and business professionals came to Brussels this November to learn about recent advances in computational linguistics and Natural Language Processing (NLP), and to share their own findings. According to what was discussed at WMT 2018, that might not be the case — at least not anytime soon.


The State of Transfer Learning in NLP

Sebastian Ruder

In contrast, current models like BERT-Large and GPT-2 consist of 24 Transformer blocks, and recent models are even deeper. Given enough data, a large number of parameters, and enough compute, a model can do a reasonable job.
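To get a rough sense of that scale, here is a minimal sketch (not code from the article) that inspects the depth and parameter count of BERT-Large, assuming the Hugging Face transformers library and PyTorch are available:

```python
# Minimal sketch: inspect the depth and size of BERT-Large.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("bert-large-uncased")
print(config.num_hidden_layers)   # 24 Transformer blocks
print(config.hidden_size)         # 1024-dimensional hidden states

model = AutoModel.from_pretrained("bert-large-uncased")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # on the order of 340M
```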

Trending Sources


Reward Isn't Free: Supervising Robot Learning with Language and Video from the Web

The Stanford AI Lab Blog

Devlin, J., et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Conference of the North American Chapter of the Association for Computational Linguistics.


Explosion in 2019: Our Year in Review

Explosion

The update fixed outstanding bugs on the tracker, gave the docs a huge makeover, improved both speed and accuracy, made installation significantly easier and faster, and added some exciting new features, like ULMFiT/BERT/ELMo-style language model pretraining. ✨ Mar 20: A few days later, we upgraded Prodigy to v1.8 to support spaCy v2.1.


Multi-domain Multilingual Question Answering

Sebastian Ruder

Reading comprehension assumes a gold paragraph is provided. Standard approaches for reading comprehension build on pre-trained models such as BERT. Using BERT for reading comprehension involves fine-tuning it to predict a) whether a question is answerable and b) whether each token is the start or the end of an answer span.
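That span-prediction setup can be sketched with the Hugging Face transformers API; this is an illustrative example rather than code from the post, and the SQuAD-finetuned checkpoint name below is just one publicly available option. (In SQuAD 2.0-style setups, the answerability decision is usually made by comparing the best span score against a no-answer score at the [CLS] position, which this sketch omits.)

```python
# Sketch of BERT-style extractive QA: the model scores every token as a
# potential start and a potential end of the answer span.
# Assumes the Hugging Face `transformers` library and PyTorch.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"  # one public SQuAD checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Where did WMT 2018 take place?"
paragraph = "WMT 2018 took place in Brussels, alongside EMNLP."

inputs = tokenizer(question, paragraph, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One logit per token for "is this the start?" and one for "is this the end?".
# A real pipeline would restrict the argmax to the paragraph tokens.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # expected to be something like "brussels"
```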


The State of Multilingual AI

Sebastian Ruder

Research models such as BERT and T5 have become much more accessible, while the latest generation of language and multi-modal models is demonstrating increasingly powerful capabilities.
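As a concrete illustration of that accessibility, here is a minimal sketch (not code from the article) that loads both model families, assuming the Hugging Face transformers library is installed; the checkpoint names are just common public examples:

```python
# Minimal sketch: pre-trained research models are a few lines away on the
# Hugging Face hub (checkpoint names here are just common public examples).
from transformers import AutoModel

bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
t5 = AutoModel.from_pretrained("t5-small")

print(bert.config.num_hidden_layers)  # 12 Transformer blocks in multilingual BERT base
print(t5.config.num_layers)           # 6 encoder layers in the small T5 variant
```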


ML and NLP Research Highlights of 2021

Sebastian Ruder

Recent work includes self-supervised speech models [6] such as W2v-BERT [7] as well as more powerful multilingual models such as XLS-R [8]. For each input chunk, nearest neighbor chunks are retrieved using approximate nearest neighbor search based on BERT embedding similarity. W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training.
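The chunk-retrieval step described there can be sketched roughly as follows. This is an illustrative approximation, assuming the Hugging Face transformers library and FAISS, with frozen mean-pooled BERT embeddings and a flat (exact) index standing in for the approximate index a production system would use:

```python
# Sketch: retrieve nearest-neighbor text chunks by BERT embedding similarity.
# Assumes `transformers`, torch, and faiss-cpu are installed.
import faiss
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(chunks):
    """Mean-pool frozen BERT token embeddings into one vector per chunk."""
    enc = tokenizer(chunks, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return ((hidden * mask).sum(dim=1) / mask.sum(dim=1)).numpy().astype("float32")

corpus_chunks = [  # hypothetical retrieval corpus
    "Machine translation quality improved sharply with Transformer models.",
    "W2v-BERT combines contrastive learning with masked prediction for speech.",
    "XLS-R scales multilingual self-supervised speech representation learning.",
]
index = faiss.IndexFlatL2(768)  # exact search; IVF/HNSW indexes make it approximate at scale
index.add(embed(corpus_chunks))

query_chunk = ["self-supervised speech pre-training"]
_, neighbor_ids = index.search(embed(query_chunk), 2)  # top-2 nearest chunks
print([corpus_chunks[i] for i in neighbor_ids[0]])
```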
