
Common Flaws in NLP Evaluation Experiments

Ehud Reiter

The ReproHum project (where I am working with Anya Belz (PI) and Craig Thomson (RF) as well as many partner labs) is looking at the reproducibility of human evaluations in NLP. User interface problems: very few NLP papers give enough information about UIs to enable reviewers to check these for problems.


SQuARE: Towards Multi-Domain and Few-Shot Collaborating Question Answering Agents

ODSC - Open Data Science

QA is a critical area of research in NLP, with numerous applications such as virtual assistants, chatbots, customer support, and educational platforms. Examples are the ACL Fellow award 2020 and the first Hessian LOEWE Distinguished Chair award (2.5 million). She is currently the president of the Association for Computational Linguistics.


68 Summaries of Machine Learning and NLP Research

Marek Rei

They annotate a new test set of news data from 2020 and find that performance of certain models holds up very well and the field luckily hasn’t overfitted to the CoNLL 2003 test set. Computational Linguistics 2022. Developing a system for the detection of cognitive impairment based on linguistic features.


ML and NLP Research Highlights of 2021

Sebastian Ruder

2021 saw many exciting advances in machine learning (ML) and natural language processing (NLP). If CNNs are pre-trained the same way as transformer models, they achieve competitive performance on many NLP tasks [28]. Popularized by GPT-3 [32], prompting has emerged as a viable alternative input format for NLP models.
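
A quick illustration of what "prompting as an alternative input format" means in practice (my own sketch in Python, not taken from Ruder's post; the reviews and labels are made up): a sentiment-classification task is re-cast as plain text completion by prepending a few labelled examples and letting an autoregressive model continue the pattern.

few_shot_prompt = (
    "Review: The film was an absolute delight.\nSentiment: positive\n\n"
    "Review: I walked out halfway through.\nSentiment: negative\n\n"
    "Review: A slow start, but the ending won me over.\nSentiment:"
)
# Any autoregressive language model can be asked to continue this string;
# the first word of the completion ("positive" or "negative") is read off
# as the model's prediction, with no task-specific fine-tuning.
print(few_shot_prompt)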


A Gentle Introduction to GPTs

Mlearning.ai

GPT-3 is an autoregressive language model created by OpenAI, released in 2020. OpenAI’s research paper on GPT-3, “Language Models are Few-Shot Learners”, was released in May 2020 and showed that state-of-the-art GPT-3-generated text is nearly indistinguishable from text written by humans. What is GPT-3?
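
As a rough, hands-on sketch of what "autoregressive language model" means (my addition, not part of the article), the small open GPT-2 model can stand in for GPT-3, which is only reachable through OpenAI's hosted API. Using the Hugging Face transformers pipeline:

# pip install transformers torch
from transformers import pipeline

# Load a small open autoregressive model as a stand-in for GPT-3.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one token at a time, each new token
# conditioned on the prompt plus everything generated so far.
result = generator(
    "GPT-3 is an autoregressive language model that",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])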


The State of Multilingual AI

Sebastian Ruder

At the same time, a wave of NLP startups has started to put this technology to practical use. I will be focusing on topics related to natural language processing (NLP) and African languages as these are the domains I am most familiar with. This post takes a closer look at how the AI community is faring in this endeavour.


AI2 at ACL 2023

Allen AI

NLPositionality: Characterizing Design Biases of Datasets and Models Sebastin Santy, Jenny Liang, Ronan Le Bras*, Katharina Reinecke, Maarten Sap* Design biases in NLP systems, such as performance differences for different populations, often stem from their creator’s positionality, i.e., views and lived experiences shaped by identity and background.