
OpenAI enhances AI safety with new red teaming methods

AI News

A critical part of OpenAI’s safeguarding process is “red teaming” — a structured methodology using both human and AI participants to explore potential risks and vulnerabilities in new systems. “We are optimistic that we can use more powerful AI to scale the discovery of model mistakes,” OpenAI stated.


This AI understands doctor’s notes: Truveta’s new model finds meaning in messy healthcare data

Flipboard

The Seattle-area healthcare technology startup introduced the Truveta Language Model in a recent preprint publication, and gave more background this week in a white paper and blog post. The model performs these tasks — making sense of messy clinical text such as doctors’ notes — with greater than 90% accuracy, the company says. The company also updates its datasets daily.