Vol. I · No. 19 · FRI, MAY 8, 2026

The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

OpenAI and journalism

We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit.

·

OpenAI Red Teaming Network

We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts.

·

Moving AI governance forward

OpenAI and other leading labs reinforce AI safety, security and trustworthiness through voluntary commitments.

·

Introducing OpenAI London

We are excited to announce OpenAI’s first international expansion with a new office in London, United Kingdom.

·

Testimony before the U.S. Senate

The following is the written testimony of Sam Altman, Chief Executive Officer of OpenAI, before the U.S. Senate Committee on the Judiciary (Subcommittee on Privacy, Technology, & the Law).

·

OpenAI Cybersecurity Grant Program

Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.

·

Democratic inputs to AI

Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.

·

Announcing OpenAI’s Bug Bounty Program

This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help.

·

GPT-4

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.

·

Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk

OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and in...

·

New and improved content moderation tooling

We are introducing a new and improved content moderation tool. The Moderation endpoint improves upon our previous content filter, and is available for free today to OpenAI API developers.
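As a sketch of how a developer might call the Moderation endpoint, the snippet below builds (but does not send) the documented `POST /v1/moderations` request with an `{"input": ...}` JSON body; the sample text and the `OPENAI_API_KEY` environment-variable convention are illustrative assumptions.

```python
import json
import os
import urllib.request

# Build a request to the Moderation endpoint. The payload shape is the
# documented {"input": ...} form; sending it requires a real API key.
payload = json.dumps({"input": "sample text to screen"}).encode()
req = urllib.request.Request(
    "https://api.openai.com/v1/moderations",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
    method="POST",
)
# A live call would be urllib.request.urlopen(req); the JSON response
# reports per-category scores plus a top-level "flagged" boolean.
```

The request is constructed but never sent here, so the sketch runs without credentials.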

·

OpenAI leadership team update

We’re happy to announce several executive role changes that reflect our recent progress and will ensure continued momentum toward our next major milestones.

·

Measuring Goodhart’s law

Goodhart’s law famously says: “When a measure becomes a target, it ceases to be a good measure.” Although originally from economics, it’s something we have to grapple with at OpenAI when figuring out how to optimize objectives that are difficult or costly to measure.
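The failure mode can be made concrete with a toy simulation (entirely hypothetical numbers, not from the post): each candidate has a true quality we care about plus an exploitable component, and selecting on the measurable proxy rewards the exploit rather than the quality.

```python
import random

random.seed(0)

# Hypothetical setup: true_quality is what we actually want; "exploit" is a
# gameable component that inflates the measured score without improving
# true quality. The proxy is all we can optimize directly.
candidates = []
for _ in range(1000):
    true_quality = random.gauss(0, 1)
    exploit = random.gauss(0, 1)
    measured = true_quality + 2 * exploit  # the proxy metric
    candidates.append((true_quality, measured))

best_by_true = max(candidates, key=lambda c: c[0])   # ideal, unmeasurable
best_by_proxy = max(candidates, key=lambda c: c[1])  # what optimization finds

print("true quality, selecting on true quality:", round(best_by_true[0], 2))
print("true quality, selecting on the proxy:   ", round(best_by_proxy[0], 2))
```

Once selection pressure is applied to the proxy, the winner is typically an exploit-heavy candidate: the measure stopped being a good measure the moment it became the target.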

·

Introducing text and code embeddings

We are introducing embeddings, a new endpoint in the OpenAI API that makes it easy to perform natural language and code tasks like semantic search, clustering, topic modeling, and classification.
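The semantic-search use case reduces to nearest-neighbor lookup by cosine similarity over the returned vectors. The sketch below uses made-up 3-dimensional stand-ins for vectors the embeddings endpoint would return (real embeddings have on the order of a thousand dimensions); the documents and query are hypothetical.

```python
import math

# Toy stand-ins for embedding vectors; illustrative only.
corpus = {
    "refund policy":      [0.9, 0.1, 0.0],
    "shipping times":     [0.1, 0.9, 0.1],
    "api authentication": [0.0, 0.1, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "how do I get my money back"

def cosine(a, b):
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Rank documents by similarity to the query; the top hit is the semantic match.
ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                reverse=True)
print(ranked[0])
```

Clustering, topic modeling, and classification follow the same pattern: the endpoint supplies the vectors, and standard vector math does the rest.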

·