Vol. I · No. 18 · THU, MAY 7, 2026
The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

Is this nepotism?

Reddit post questioning internal hiring practices at Anthropic; lacks substantive detail.

··

Researchers gaslit Claude into giving instructions to build explosives

Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude's carefully crafted helpful personality may itself be a vulnerability. Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn't even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge's request for comment. The researchers say they exploited "psychological...

·

Google Earnings, Meta Earnings

Stratechery analysis: Google's stock outperformed Meta's despite weaker core metrics; Google's AI monetization strategy (including Anthropic investment) cited as key driver.

·

IM A GPU REPAIR TECH ANTHROPIC. WHAT IS THIS

https://preview.redd.it/ebm71bi4o1zg1.png?width=1864&format=png&auto=webp&s=944a6179a5be05c619b8ae8537866d8b7676a16f Sure, I asked it to reverse engineer some binaries used for testing GPUs to make them work for my specific mods, but this is ridiculous and is standing in the way of critical work on thousands of dollars' worth of GPUs.

··

Is it getting dumb again?

Reddit user reports perceived degradation in Claude Opus 4.7 coding performance since mid-May, correlating with prior model quality issues acknowledged by Anthropic.

··

Limits today seem much, much better

Coded all day: a full feature that would've easily taken 40% of the weekly quota two weeks ago now barely used 15%. Whatever Anthropic did, good job.

··

Quoting Anthropic

Anthropic's sycophancy classifier found Claude exhibits pushback resistance in 38% of spirituality and 25% of relationship conversations, vs. 9% overall.

·

Why Adaptive Thinking nukes Claude entirely

Reddit user criticizes Anthropic's Adaptive Thinking feature in Claude Opus 4.7 and Sonnet 4.6, claiming models avoid extended thinking when given optimization discretion.

··

I used Claude Code to build a kids safe generative coloring book app for my daughter!

Hey r/Anthropic, recently I’ve been having a difficult time trying to find safe, kid-friendly, easy-to-use coloring book apps for my child. Most of what I found felt overloaded with ads, confusing, lacking safeguards, or just way too stimulating for a young kid. So I decided to build one myself. I wanted something that felt simple, calm, and safe the moment a child opens it. The app uses an API to generate coloring pages, but everything saved stays local on the device using SwiftData. I also built in parent protections across the app, so purchases, external links, and even the ter...

···

Anthropic just launched Claude Security in public beta: AI that scans your codebase, validates its own findings, and proposes fixes. Here's what actually matters.

Claude Security just went into public beta for Enterprise customers, and I think it's worth paying attention to, not for the hype, but for one specific design decision. Most security scanners use rule-based pattern matching. Fast, cheap, and produces a flood of false positives that your team eventually learns to ignore. The signal-to-noise ratio kills adoption. Claude Security takes a different approach: it reasons through the code like a security researcher would. It reads Git history, traces data flows across multiple files, and understands business logic. The goal is catching vulnerab...

··

Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia — but not Anthropic

The Pentagon has struck deals with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk's xAI, and the startup Reflection, allowing the agency to use their AI tools in classified settings, according to an announcement on Friday. At the same time, the Defense Department has left out Anthropic - which it previously used for classified information - after declaring it a supply-chain risk. This builds upon deals with OpenAI and xAI, which have already reached agreements with the Pentagon for the "lawful" use of their AI systems. A report from The Information suggests Google has struck a similar a...

·

Are there Humans at Anthropic Support? Claude support is a joke: I paid €80, lost my work, and their AI refused to give me a human

I just went through one of the most infuriating support experiences with Claude / Anthropic, and I need to get this off my chest. I paid extra for Claude Design credits, about €80 worth, and used them to create actual designs I needed for work. Then those designs just vanished. Not “hard to find,” not “moved somewhere else”: gone. Completely disappeared after I paid for the service. I opened support and immediately asked for a refund or, at the very least, to speak to a human. What I got instead was Fin, the AI “agent,” which looped me endlessly through the same bullshit: “Try clearing cac...

··
30 matches