
Stanford AI Lab Papers and Talks at ACL 2022

The Stanford AI Lab Blog

The 60th Annual Meeting of the Association for Computational Linguistics (ACL) 2022 is taking place May 22nd - May 27th. We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos and blogs below.


2022: We reviewed this year’s AI breakthroughs

Applied Data Science

Just wait until you hear what happened in 2022. DALL-E, and pre-2022 tools in general, attributed their success either to the Transformer or to Generative Adversarial Networks. In 2022 we got diffusion models (NeurIPS paper), one of the first appearances of an AI model used for text-to-image generation.


Trending Sources


The State of Multilingual AI

Sebastian Ruder

Research models such as BERT and T5 have become much more accessible, while the latest generation of language and multi-modal models are demonstrating increasingly powerful capabilities. This post is partially based on a keynote I gave at the Deep Learning Indaba 2022. The Deep Learning Indaba 2022 in Tunisia.


68 Summaries of Machine Learning and NLP Research

Marek Rei

EMNLP 2022. NeurIPS 2022. They show performance improvements in some settings and speed improvements in all evaluated settings, proving particularly useful when the LLM needs to retrieve information about multiple entities. UC Berkeley, CMU, Google Research.


All Languages Are NOT Created (Tokenized) Equal

Topbots

70% of research papers published in a computational linguistics conference evaluated only English. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2340–2354, Dublin, Ireland. Association for Computational Linguistics.


ML and NLP Research Highlights of 2021

Sebastian Ruder

[6] such as W2v-BERT [7], as well as more powerful multilingual models such as XLS-R [8]. For each input chunk, nearest-neighbor chunks are retrieved using approximate nearest-neighbor search based on BERT embedding similarity. For more work on this topic, check out the EvoNLP workshop at EMNLP 2022. Why is it important?
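The retrieval step described in that excerpt can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes chunk embeddings are already computed (e.g. with BERT), and a brute-force cosine-similarity search stands in for the approximate nearest-neighbor index used at scale.

```python
import numpy as np

def retrieve_neighbors(query_emb, chunk_embs, k=2):
    """Return indices of the k stored chunks most similar to the query.

    Brute-force cosine similarity; a real system would use an
    approximate nearest-neighbor index over the same embeddings.
    """
    # Normalize so dot products equal cosine similarities.
    q = query_emb / np.linalg.norm(query_emb)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    sims = c @ q
    # Indices of the top-k most similar chunks, best first.
    return np.argsort(-sims)[:k]

# Toy example: 4 stored chunk embeddings, one query close to chunk 2.
rng = np.random.default_rng(0)
chunk_embs = rng.normal(size=(4, 8))
query = chunk_embs[2] + 0.01 * rng.normal(size=8)
print(retrieve_neighbors(query, chunk_embs, k=2))
```

The retrieved neighbor chunks would then be fed to the language model alongside the input chunk.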
