Vol. I · No. 18 · THU, MAY 7, 2026

The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

[BUG] Claude Max 5x Gift silently downgraded to Free after 12 days — Receipt #2227-0776-8866 (paid April 9, 2026)

**TL;DR**: Bought Claude Max 5x Gift on April 9, 2026. It worked fine for 12 days, then was silently downgraded to the Free plan with zero notification. Billing still shows “Paid $0.00” and valid until May 9. This is a widespread known bug. The real problem is support:

- [https://support.anthropic.com](https://support.anthropic.com) has no **ticket form** and no way for regular users to submit a real request.
- The only way to contact support is the in-app “Get Help” chat on [claude.ai](http://claude.ai), which is just a useless Fin AI Agent.
- Once your account is downgraded to Free, h...

·

Product Management Interview @ Anthropic

Hi all. Apologies if this isn’t the right place to ask, but I’ve seen quite a few interview-related posts here and wanted to ask whether anyone has experience interviewing for PM roles at Anthropic. I’d be especially interested in hearing how the process felt overall and what types of questions were asked. I recently had an unexpected outreach from their recruiter, and we had a really good conversation about roles on their safeguards team, so I’m considering moving forward. Appreciate any insights; thanks in advance!

·

Anthropic’s new cybersecurity model could get it back in the government’s good graces

The Trump administration has spent nearly two months fighting with AI company Anthropic. It's dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. Anthropic's relationship with the Pentagon soured quickly in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or lethal fully autonomous weapons with no human in the loop. Ant...

·

Tokenmaxxing, OpenAI’s shopping spree, and the AI Anxiety Gap

The gap between AI insiders and everyone else is widening, and the spending, suspicion, and even new vocabulary are starting to show it. While OpenAI is busy buying up everything from finance apps to talk shows, a certain shoe company just rebranded as an AI infrastructure play, and Anthropic unveiled a model it says is too powerful to release publicly …but apparently not too […]

·


llm-anthropic 0.25

llm-anthropic 0.25 adds Claude Opus 4.7 support with thinking_effort and thinking_display options.
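The release note names two new options; a minimal sketch of how they might be passed via the `llm` CLI. The model alias `claude-opus-4.7` and the option values shown are assumptions for illustration, not taken from the release note:

```shell
# Install the plugin, then pass the new options with -o.
# Model alias and option values are assumed, not confirmed by the note.
llm install llm-anthropic
llm -m claude-opus-4.7 \
  -o thinking_effort high \
  -o thinking_display true \
  "Summarize the latest release notes"
```

Running this requires an Anthropic API key configured for `llm` (e.g. via `llm keys set anthropic`).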

·

Introducing Claude Opus 4.7

Anthropic releases Claude Opus 4.7 with improved coding, agents, vision, and multi-step reasoning capabilities.

·

Why having “humans in the loop” in an AI war is an illusion

The availability of artificial intelligence for use in warfare is at the center of a legal battle between Anthropic and the Pentagon. This debate has become urgent, with AI playing a bigger role than ever before in the current conflict with Iran. AI is no longer just helping humans analyze intelligence. It is now an…

·
30 matches