Vol. I · No. 18 · THU, MAY 7, 2026
Topic

Anthropic

Every story matching this topic across titles and summaries, newest first.

New

Anthropic just got 220,000 GPUs from the man who called Claude "misanthropic and evil" three months ago

The compute is real. The implications are stranger than the headline suggests. Colossus 1, with 220,000 Nvidia GPUs and 300+ megawatts, is now running Claude inference. Anthropic moved fast: Claude Code limits doubled overnight, peak-hour caps removed, Opus API rates up. For anyone who's been hitting walls, this is immediately tangible. But the deal deserves more scrutiny than it's getting. Musk included a clause reserving SpaceX's right to reclaim the compute if Claude "engages in actions that harm humanity." That's not standard infrastructure boilerplate. That's a kill switch written int...

··

Anthropic Just Secured a Reserve.

Anthropic secures partnership with SpaceX for 300MW+ compute at Colossus 1, adding 220k+ NVIDIA GPUs within one month.

··

Double limits!!

Partnership with SpaceX: Anthropic just doubled the limits. Source: https://x.ai/news/anthropic-compute-partnership

··

SpaceX Compute Deal - Double Limits

Anthropic partners with SpaceX for compute capacity; removes Claude Code peak-hour limits and raises API rate limits for Opus.

··

Let's talk about ban policy

Should users be banned? If Anthropic wants to be the next Google, meaning revolutionize the internet and the way computers are used, should users be banned? I've been reading a lot of horror stories lately about people getting banned for stupid things like "research work," standard usage, or simply security research. Who decides? Exactly: the model. Then you get banned without the possibility of appeal, because the same model reads the appeals. Sure, people create new accounts, but it's only a matter of time before Claude Code collects device fingerprints. Perhaps it's already doing so. Should C...

··

I can't believe this

Just researched some historical facts concerning Russian propaganda. Then I discovered this source in Claude's answer. Am I paying for Claude to be provided with Grokipedia "facts"? Please, Dario, Anthropic board, Anthropic team: fix that.

··

Spyware?

Reddit user reports suspicious behavior in Claude desktop app; claims Anthropic-signed files involved.

··

Agents for financial services

Anthropic releases ten Cowork and Claude Code plugins plus Microsoft 365 integrations and MCP app for financial services.

·

Google, Microsoft, and xAI will allow the US government to review their new AI models

Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the US government to review new AI models before they're released to the public. In an announcement on Tuesday, the Commerce Department's Center for AI Standards and Innovation (CAISI) says it will work with the AI companies to perform "pre-deployment evaluations and targeted research to better assess frontier AI capabilities." CAISI, which started evaluating models from OpenAI and Anthropic in 2024, says it has performed 40 reviews so far. Both companies "have renegotiated their existing partnerships with the center to bett...

·

Is this nepotism?

Reddit post questioning internal hiring practices at Anthropic; lacks substantive detail.

··

Researchers gaslit Claude into giving instructions to build explosives

Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude's carefully crafted helpful personality may itself be a vulnerability. Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn't even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge's request for comment. The researchers say they exploited "psychological...

·

Google Earnings, Meta Earnings

Stratechery analysis: Google's stock outperformed Meta's despite weaker core metrics; Google's AI monetization strategy (including Anthropic investment) cited as key driver.

·

I'M A GPU REPAIR TECH, ANTHROPIC. WHAT IS THIS

https://preview.redd.it/ebm71bi4o1zg1.png?width=1864&format=png&auto=webp&s=944a6179a5be05c619b8ae8537866d8b7676a16f Sure, I asked it to reverse engineer some binaries used for testing GPUs to make them work for my specific mods, but this is ridiculous and is standing in the way of critical work on thousands of dollars' worth of GPUs.

··

Is it getting dumb again?

Reddit user reports perceived degradation in Claude Opus 4.7 coding performance since mid-May, correlating with prior model quality issues acknowledged by Anthropic.

··

Limits today seem much, much better

Coded all day: a full feature that would've easily taken 40% of the weekly quota two weeks ago. Now barely 15%. Whatever Anthropic did, good job.

··

Quoting Anthropic

Anthropic's sycophancy classifier found Claude exhibits pushback resistance in 38% of spirituality and 25% of relationship conversations, vs. 9% overall.

·

Why Adaptive Thinking nukes Claude entirely

Reddit user criticizes Anthropic's Adaptive Thinking feature in Claude Opus 4.7 and Sonnet 4.6, claiming models avoid extended thinking when given optimization discretion.

··

I used Claude Code to build a kid-safe generative coloring book app for my daughter!

Hey r/Anthropic. Recently I've been having a difficult time finding safe, kid-friendly, easy-to-use coloring book apps for my child. Most of what I found felt overloaded with ads, confusing, lacking safeguards, or just way too stimulating for a young kid. So I decided to build one myself. I wanted something that felt simple, calm, and safe the moment a child opens it. The app uses an API to generate coloring pages, but everything saved stays local on the device using SwiftData. I also built in parent protections across the app, so purchases, external links, and even the ter...

···

Anthropic just launched Claude Security in public beta: AI that scans your codebase, validates its own findings, and proposes fixes. Here's what actually matters.

Claude Security just went into public beta for Enterprise customers, and I think this is worth paying attention to, not for the hype but for one specific design decision. Most security scanners use rule-based pattern matching. Fast, cheap, and produces a flood of false positives that your team eventually learns to ignore. The signal-to-noise ratio kills adoption. Claude Security takes a different approach: it reasons through the code like a security researcher would. It reads Git history, traces data flows across multiple files, and understands business logic. The goal is catching vulnerab...

··

Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia — but not Anthropic

The Pentagon has struck deals with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk's xAI, and the startup Reflection, allowing the agency to use their AI tools in classified settings, according to an announcement on Friday. At the same time, the Defense Department has left out Anthropic - which it previously used for classified information - after declaring it a supply-chain risk. This builds upon deals with OpenAI and xAI, which have already reached agreements with the Pentagon for the "lawful" use of their AI systems. A report from The Information suggests Google has struck a similar a...

·

Are there Humans at Anthropic Support? Claude support is a joke: I paid €80, lost my work, and their AI refused to give me a human

I just went through one of the most infuriating support experiences with Claude / Anthropic, and I need to get this off my chest. I paid extra for Claude Design credits, about €80 worth, and used them to create actual designs I needed for work. Then those designs just vanished. Not “hard to find,” not “moved somewhere else”: gone. Completely disappeared after I paid for the service. I opened support and immediately asked for a refund or, at the very least, to speak to a human. What I got instead was Fin, the AI “agent,” which looped me endlessly through the same bullshit: “Try clearing cac...

··

Half of Google’s and Amazon’s blowout AI profits came from a stake in Anthropic—not from their actual business

Four of the largest U.S. tech companies reported earnings Wednesday afternoon, confirming an AI capital expenditure build-out without modern precedent. Combined, they devoted $130.65 billion to capital expenditures in the first three months of 2026—more than three times the inflation-adjusted cost of the Manhattan Project, in a single quarter. They plan to spend nearly $700 billion this year alone, as much as the U.S. government spends on Medicare. The headline profits suggest that the bet is paying off; Google parent Alphabet’s profits jumped 81% to $62.6 billion last quarter, while Amaz...

··

Opus 4.7 is a regression from 4.6 - real-world document generation broken

Anthropic just released Opus 4.7 as their most advanced model. I reverted to 4.6 within days. I use Claude for production work -- not chat, not summaries. Real deliverables with real deadlines. Here is what happened. I asked 4.7 to update a Word document. It is a task the previous model handled routinely. The new model produced a plain text markdown file with a .docx extension. Not a degraded document. Not a partially formatted document. A file that was literally not a Word document at all. Delivered with full confidence and zero warning that anything was wrong. When I caught it and ...

··

Claude down again

In the middle of a long project with Cowork, Claude goes down AGAIN. I'm abandoning Anthropic for my important projects. It's become far too unreliable. It's a shame, because they have a good product, when it works. The company is clearly distracted and overwhelmed with lots of things having nothing to do with day-to-day performance for its customers.

··

They know what they're doing.

Reddit commentary on Anthropic's compute constraints, per-account experiments, and pricing strategy; unverified claims.

··

The shilling of the /schedule feature is out of control

I'm much more sympathetic towards Anthropic than most users here. Started using CC when it was barely usable, and I think they are the good guys dealing with a real supply crunch. But every session I get prompted a dozen times to /schedule random tasks two weeks in advance. Even for small features: "Want me to /schedule a check-in for 2 weeks when this is live?" I realize they are trying to scale to $100b in a year... they should focus on the product, not shilling.

··

Larry’s risky business

Oracular spectacular? | Image: Cath Virginia / The Verge If you want to know whether the AI bubble is bursting, there's only one publicly traded company that will tell you: Oracle. That's right, the database company. Oracle has burned its boats and pivoted to AI, but not in any kind of usual way. It is not a foundation model builder like OpenAI or Anthropic, obviously. It's not quite a neocloud, though it has entered the same bare-metal business as CoreWeave. It is a software-as-a-service company that has made an audacious bet on a very specific future version of AI as Oracle's traditional bu...

·

Opus 4.7 is somewhere between seriously clueless and stupidly dangerous. It's the worst frontier model I have used in the past 2 years. We were hoping to get at least our 4.6 back, but 4.7, with so many critical logical failures, means you have to babysit it all the time. I'm losing hope in Anthropic.

Opus 4.7 on Max effort decided to create a new email template by itself (which is pretty stupid btw) and mass mailed it to the whole database (some emails were repeatedly sent 20x). Before you ask me - yes, CLAUDE.md has the exact rule for that: it's supposed to email the tester before any new email templates are used in production. I created this safety rule a few months ago. I feel like Opus 4.7 is a huge letdown, the way it's been downgraded. If Anthropic is "pushing the boundaries", it's probably only in the meaning of how far they can push the...

··

Opus 4.7 is insanely bad

4.6 was amazing. It did the job well, even if it sometimes needed some back and forth to clarify things, and it reacted well, even to complex modifications. What was really amazing was the sort of form that pops up to ask you questions to narrow the scope of the request. 4.7 talks too much, drifts away, burns a ton of tokens, and then asks you questions by talking too much again. The questions are not even relevant. The outputs are either simplistic or badly complex and nonsensical. I think Anthropic wanted to give 4.7 more depth or something, maybe it does get more ...

··

Claude for Creative Work

Anthropic positions Claude for creative writing and design tasks; feature/capability announcement targeting non-technical users.

·

Claude can now plug directly into Photoshop, Blender, and Ableton

Claude’s new Blender connector lets you debug scenes, build new tools, and batch-apply object changes directly from the chatbot interface. | Image: Anthropic Anthropic has launched a set of connectors for Claude that allow the AI chatbot to tap into popular creative software, including Adobe's Creative Cloud apps, Affinity, Blender, Ableton, Autodesk, and more. This marks the company's latest efforts to break into the creative industry following its launch of Claude Design earlier this month. The new connectors - which enable Claude to access apps, retrieve data, and take actions within conne...

·

After I opened a complaint, Anthropic refunded me in credits instead of money (without letting me choose), closed my ticket saying everything was fine with my 5x Max account… and now my paid plan is gone before my billing cycle ended...

I was overcharged by more than $100, so I opened a billing ticket last month. They only responded yesterday and said everything looked fine because they refunded me $100 in credits. They didn’t give me any option to choose between a refund to my card or credits, but I can let that go... The worst part is what happened next: due to what seems like an error on their side, I lost access to my plan. I no longer have 5x Max and my account now shows as Free. This is insane. Do I really have to wait another month to fix this while not having access to the service I already paid for? My billing c...

··

Google and Pentagon reportedly agree deal for ‘any lawful’ use of AI

Google has signed a classified deal that allows the US Department of Defense to use its AI models for "any lawful government purpose," The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in "inhumane or extremely harmful ways." If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing...

··

Attack of the killer script kiddies

Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA's Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of actual software code that DARPA had injected with artificial flaws. The teams were capable enough to identify most of the artificial bugs, but their automated tools went beyond that - they found more than a dozen bugs that DARPA hadn't inserted at all. Even before the security earthquake that Anthropic delivered this month with Claude Mythos - the...

·

Anthropic hitting 40% enterprise share makes the "just add a fallback provider" advice weaker, not stronger

Menlo Ventures' enterprise survey put Anthropic at 40% of LLM spend, OpenAI at 27%. The takes I've seen are mostly about the leaderboard. The thing nobody's saying out loud: the standard agent-reliability advice ("don't depend on one provider, add a fallback") got harder to actually execute, not easier. When the split was closer to 50/30, both providers were realistic peers. You could run prod on one and have the other warm. Now most of us are running primarily on Claude — Sonnet for tool calls, Opus for harder stuff — and the "fallback" is a model we haven't tested against our actual prompt...
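The fallback pattern the post is pushing back on can be sketched in a few lines. Everything below is a hypothetical stand-in: `call_primary` and `call_fallback` are placeholder functions, not real Anthropic or OpenAI SDK calls; the point is the shape of the failover, not any specific API.

```python
# Hypothetical sketch of the "add a fallback provider" advice discussed
# above. The provider calls are stand-ins for real SDK clients.

def call_primary(prompt: str) -> str:
    # Stand-in for the primary provider; simulated here as unavailable.
    raise TimeoutError("primary provider unavailable")


def call_fallback(prompt: str) -> str:
    # Stand-in for the secondary provider.
    return f"fallback answer to: {prompt}"


def complete(prompt: str) -> str:
    """Try the primary provider; on any failure, fail over."""
    try:
        return call_primary(prompt)
    except Exception:
        return call_fallback(prompt)


print(complete("ping"))  # the primary errors, so this routes to the fallback
```

The post's caveat is exactly about this shape: unless the `except` branch gets regular traffic (shadow requests, scheduled evals against the secondary model), the fallback is the least-tested path in the system, and prompts tuned for one model may behave differently on the other.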

··

Anthropic Support is a joke

Rant on Anthropic support. I have two MacBooks, one an Air and the other a Pro. On the Pro, Claude Code stopped working inside the official Anthropic application altogether. Whenever I would type a message, it would create a session with a summary for the name but wouldn't ever do anything. Just a blank screen, no error messages, nothing. I spent forever troubleshooting it, pulling logs, reinstalling, clearing the cache, etc., to no avail. Reached out to Anthropic, got their AI bot; once it couldn't solve my issue it forwarded my message to a human. It took over a month to get a...

··

Can't subscribe to Pro for a week, payment fails, support is a bot loop, and I'm owed €68 I can't use

Hey everyone, For over a week now, I've been trying to re-subscribe to the Pro plan from a free account, and I keep hitting the same wall: "*Payment failed. Please try again later. If the problem persists, contact support at https://support.anthropic.com/*" Here's the fun part: that link redirects you straight to Fin, their AI support chatbot. After 11 emails, the bot's only suggestion is… to go back to that same link. I've attached a screenshot of the last mail. I've already tried multiple devices, browsers, and network connections, double and triple-checking my billing info. I'm based i...

··

Anthropic's Claude remote uses GLM-4.7

Reddit user reports Anthropic Claude remote environment defaults to GLM-4.7 instead of Claude models, raising questions about model sourcing.

··

Anthropic refusing statutory refund (France) — automated bot only, no human review

Subscribed to Claude 20 Max two days ago (more than 200 euros/month...). Service was more than underwhelming, so I immediately requested refund after one day and a half and therefore within the 14-day withdrawal period guaranteed by Article L221-18 of the French Code de la consommation (EU Consumer Rights Directive). I wanted to terminate it right away because I am truly unhappy with it. Refused by an AI support agent ("Fin AI Agent") on the grounds that a refund had previously been issued on my account (of 20 euros in the past). This limitation does not appear in Anthropic's published Cons...

··
100 stories