Vol. I · No. 18 · THU, MAY 7, 2026

The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

Half of Google’s and Amazon’s blowout AI profits came from a stake in Anthropic—not from their actual business

Four of the largest U.S. tech companies reported earnings Wednesday afternoon, confirming an AI capital expenditure build-out without modern precedent. Combined, they devoted $130.65 billion to capital expenditures in the first three months of 2026—more than three times the inflation-adjusted cost of the Manhattan Project, in a single quarter. They plan to spend nearly $700 billion this year alone, as much as the U.S. government spends on Medicare. The headline profits suggest that the bet is paying off; Google parent Alphabet’s profits jumped 81% to $62.6 billion last quarter, while Amaz...

··

Opus 4.7 is a regression from 4.6 - real-world document generation broken

Anthropic just released Opus 4.7 as their most advanced model. I reverted to 4.6 within days. I use Claude for production work -- not chat, not summaries. Real deliverables with real deadlines. Here is what happened. I asked 4.7 to update a Word document. It is a task the previous model handled routinely. The new model produced a plain text markdown file with a .docx extension. Not a degraded document. Not a partially formatted document. A file that was literally not a Word document at all. Delivered with full confidence and zero warning that anything was wrong. When I caught it and ...

··

Claude down again

In the middle of a long project with Cowork, Claude goes down, AGAIN. I’m abandoning Anthropic for my important projects. It’s become far too unreliable. It’s a shame, because they have a good product when it works. The company is clearly distracted and overwhelmed with lots of things that have nothing to do with day-to-day performance for its customers.

··

They know what they're doing.

Reddit commentary on Anthropic's compute constraints, per-account experiments, and pricing strategy; unverified claims.

··

The shilling of the /schedule feature is out of control

I'm much more sympathetic towards Anthropic than most users here. Started using CC when it was barely usable and think they are the good guys dealing with a real supply crunch. But every session I get prompted a dozen times to /schedule random tasks for two weeks in advance. Even small features: "Want me to /schedule a check-in for 2 weeks when this is live?" I realize they are trying to scale to $100b in a year... they should focus on the product, not shilling

··

Larry’s risky business

Oracular spectacular? | Image: Cath Virginia / The Verge

If you want to know whether the AI bubble is bursting, there's only one publicly traded company that will tell you: Oracle. That's right, the database company. Oracle has burned its boats and pivoted to AI, but not in any kind of usual way. It is not a foundation model builder like OpenAI or Anthropic, obviously. It's not quite a neocloud, though it has entered the same bare-metal business as CoreWeave. It is a software-as-a-service company that has made an audacious bet on a very specific future version of AI as Oracle's traditional bu...

·

Opus 4.7 is somewhere between seriously clueless and stupidly dangerous. It is the worst frontier model I have used in the past 2 years. We were hoping to at least get our 4.6 back, but 4.7 with so many critical logical failures means you have to babysit it all the time. I'm losing hope in Anthropic.

Opus 4.7 on Max effort decided to create a new email template by itself (which is pretty stupid, btw) and mass-mailed it to the whole database (some emails were repeatedly sent 20x). Before you ask me: yes, CLAUDE.md has the exact rule for that; it's supposed to email the tester before any new email templates are to be used in production. I created this safety rule a few months ago. I feel like Opus 4.7 is a huge letdown, given the way it's been downgraded. If Anthropic is "pushing the boundaries", it's probably only in the sense of how far they can push the...

··

Opus 4.7 is insanely bad

4.6 was amazing: it did the job well, even if it sometimes needed some back and forth to clarify things. It reacted well, even to complex modifications, and what was really amazing was the sort of form that pops up to ask you questions to narrow the scope of the request. 4.7 talks too much, drifts away, burns a ton of tokens, and then asks you questions by talking too much again. The questions are not even relevant. The outputs are either simplistic or badly complex and nonsensical. I think Anthropic wanted to give 4.7 more depth or something, maybe it does get more ...

··

Claude for Creative Work

Anthropic positions Claude for creative writing and design tasks; feature/capability announcement targeting non-technical users.

·

Claude can now plug directly into Photoshop, Blender, and Ableton

Claude’s new Blender connector lets you debug scenes, build new tools, and batch-apply object changes directly from the chatbot interface. | Image: Anthropic

Anthropic has launched a set of connectors for Claude that allow the AI chatbot to tap into popular creative software, including Adobe's Creative Cloud apps, Affinity, Blender, Ableton, Autodesk, and more. This marks the company's latest effort to break into the creative industry following its launch of Claude Design earlier this month. The new connectors - which enable Claude to access apps, retrieve data, and take actions within conne...

·

After I opened a complaint, anthropic refunded me in credits instead of money (without letting me choose), closed my ticket saying everything was fine with my 5x Max account… and now my paid plan is gone before my billing cycle ended...

I was overcharged by more than $100, so I opened a billing ticket last month. They only responded yesterday and said everything looked fine because they refunded me $100 in credits. They didn’t give me any option to choose between a refund to my card or credits, but I can let that go... The worst part is what happened next: due to what seems like an error on their side, I lost access to my plan. I no longer have 5x Max and my account now shows as Free. This is insane. Do I really have to wait another month to fix this while not having access to the service I already paid for? My billing c...

··

Google and Pentagon reportedly agree deal for ‘any lawful’ use of AI

Google has signed a classified deal that allows the US Department of Defense to use its AI models for "any lawful government purpose," The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in "inhumane or extremely harmful ways." If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing...

··

Attack of the killer script kiddies

Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA's Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of actual software code that DARPA had injected with artificial flaws. The teams were capable enough to identify most of the artificial bugs, but their automated tools went beyond that - they found more than a dozen bugs that DARPA hadn't inserted at all. Even before the security earthquake that Anthropic delivered this month with Claude Mythos - the...

·

Anthropic hitting 40% enterprise share makes the "just add a fallback provider" advice weaker, not stronger

Menlo Ventures' enterprise survey put Anthropic at 40% of LLM spend, OpenAI at 27%. The takes I've seen are mostly about the leaderboard. The thing nobody's saying out loud: the standard agent-reliability advice ("don't depend on one provider, add a fallback") got harder to actually execute, not easier. When the split was closer to 50/30, both providers were realistic peers. You could run prod on one and have the other warm. Now most of us are running primarily on Claude — Sonnet for tool calls, Opus for harder stuff — and the "fallback" is a model we haven't tested against our actual prompt...
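The post's underlying point is that a fallback only buys reliability if it is actually exercised against your real workload. The pattern being critiqued is roughly the following, sketched here with hypothetical stand-in provider functions (no real provider SDKs are assumed):

```python
# Minimal sketch of the "primary provider with warm fallback" pattern
# discussed above. call_primary / call_fallback are hypothetical
# placeholders, not real SDK calls.

class ProviderError(Exception):
    """Raised when a provider is unavailable or errors out."""


def call_primary(prompt: str) -> str:
    # Stand-in for the primary provider; here it simulates an outage.
    raise ProviderError("primary unavailable")


def call_fallback(prompt: str) -> str:
    # Stand-in for the warm fallback provider.
    return f"fallback answer to: {prompt}"


def complete(prompt: str) -> str:
    """Try the primary provider; on failure, route to the fallback."""
    try:
        return call_primary(prompt)
    except ProviderError:
        return call_fallback(prompt)
```

The weakness the post identifies lives in `call_fallback`: if the fallback model has never been tested against your actual prompts and tool schemas, the `except` branch silently swaps in behavior you have never validated.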

··

Anthropic Support is a joke

Rant on Anthropic Support. I have two MacBooks: one is an Air and the other is a Pro. On the Pro, Claude Code stopped working inside the official Anthropic application altogether. Whenever I typed a message, it would create a session with a summary name but would never do anything. Just a blank screen. No error messages, nothing. I spent forever troubleshooting it, pulling logs, reinstalling, clearing the cache, etc., to no avail. I reached out to Anthropic and got their AI bot; once it couldn't solve my issue, it forwarded my message to a human. It took over a month to get a...

··