Vol. I · No. 18 · THU, MAY 7, 2026
Source · Community

r/Anthropic

Reddit · COMMUNITY

Last updated May 7, 2026, 3:30 PM

New

Anthropic just got 220,000 GPUs from the man who called Claude "misanthropic and evil" Three months ago....

The compute is real. The implications are stranger than the headline suggests. Colossus 1 (220,000 Nvidia GPUs, 300+ megawatts) is now running Claude inference. Anthropic moved fast: Claude Code limits doubled overnight, peak-hour caps removed, Opus API rates up. For anyone who's been hitting walls, this is immediately tangible. But the deal deserves more scrutiny than it's getting. Musk included a clause reserving SpaceX's right to reclaim the compute if Claude "engages in actions that harm humanity." That's not standard infrastructure boilerplate. That's a kill switch written int...

··

Did you notice any improvements?

Reddit discussion asking whether Claude models show performance improvements; lacks substantive technical detail.

··

4.7 makes more work than 4.6

For me and my business, 4.6 was the bee's knees. We fired OpenAI, stopped using GPT in process tasks, and moved a lot of our automation and workflow into 4.6. Today we went back to 4.6. 4.7 is burning us out on checks and balances. It's **WAY TOO AGGRESSIVE** in making its own decisions and moving forward with bad direction. What we missed was "before I continue" and some checks and balances. We burn context, tokens, credits, and tool usage insanely fast with 4.7, with about a 50% error rate. Has anyone experienced this? I just switched to 4.6 three hours into a large task that kept failing with 4.7...

··

Potential payment info leak

Hi, a few hours ago we started seeing unauthorized charges on our company card. This card was only provided when purchasing Claude Max. By sheer luck, we blocked the card after the first transaction. Since then there have been 5 more attempted charges, all to random services like Auto Glass and Walmart, most showing Memphis in the header. Check your cards.

··

So we're now happy using toxic air turbines, Dario?

Just a reminder that the announced data centre is the one where xAI installed a massive number of toxic gas turbines to power it, which is illegal and deadly to the local area.

··

Double limits!!

Partnership with SpaceX: Anthropic just doubled the limits. Source: https://x.ai/news/anthropic-compute-partnership

··

I paused, went to eat, took shower, 1 prompt later, 45% (8 mins into a new session)

I was working on a project. I got hungry and went to eat and take a shower, also treating this as my break. When I came back, the session was at 0%. I typed to Claude that the CSS animation needs to be slower and more subtle; he changed it, and usage jumped to 45%. Nowhere did it warn me that the cache might be cold, or that I would consume a lot of tokens just to CONTINUE a chat I didn't even close on the same PC. So now I have to slow down my work and wait for this 5-hour cycle to end to properly speed up my progress.

··

Let's talk about ban policy

Should users be banned? If Anthropic wants to be the next Google, meaning revolutionize the internet and the way computers are used, should users be banned? I've been reading a lot of horror stories lately about people getting banned for stupid things like "research work," standard usage, or simply security research. Who decides? Exactly: the model. Then you get banned without the possibility of appeal, because the same model reads the appeals. Sure, people create new accounts, but it's only a matter of time before Claude Code collects device fingerprints. Perhaps it's already doing so. Should C...

··

Let's talk about Opus 4.7

My experience with Opus 4.7 is that it's not worth it for most use cases. It thinks forever, hallucinates a lot, and costs a ton of money. Not saying it's bad, but Sonnet 4.6 is enough for everything I'm doing. I haven't found a single task where Opus 4.7 actually excels without bloating the response. Anyone else feeling the same? What are you using Opus for that actually justifies it?

··

hello????

I literally just started a new chat for a project. The project has 3 Markdown files, around 200 lines each, and after just 4 messages I’ve already hit 75% of my Pro plan usage. Can someone tell me what the hell is going on?

··

Is this nepotism?

Reddit post questioning internal hiring practices at Anthropic; lacks substantive detail.

··

Everyday

Reddit post with no content; insufficient information to assess.

··

Banned from Claude for No Reason

User reports account suspension from Claude after linking Spotify integration; anecdotal complaint without confirmation of cause.

··

Opus 4.7 is beyond bad

Reddit user reports degraded performance in Claude Opus 4.7 compared to 4.6, speculating smaller base model or optimization tradeoffs.

··

Unprompted.

Pretty cool. I am probably being a bit careless running it free like that, but it's still wild to see lol.

··

Is it getting dumb again?

Reddit user reports perceived degradation in Claude Opus 4.7 coding performance since mid-May, correlating with prior model quality issues acknowledged by Anthropic.

··

This is... New?

Currently the number of messages remaining doesn't change. But this makes me _very_ curious about it being _message_ based. (Pro account, mobile app) Am I dumb? Is this not new?

··

Limits today seem much, much better

Coded all day: a full feature that would've easily taken 40% of the weekly quota two weeks ago. Now barely 15%. Whatever Anthropic did, good job.

··

Agentic CEOs

It's interesting how there is no advertised push to replace CEOs. LLMs are incredibly powerful, but they have also been marketed very powerfully.

··

Do you remember when they said prompt engineering was a thing of the past?

Not that long ago, the pitch was that newer models would make prompt engineering mostly obsolete. You would not need elaborate prompting to get optimal performance. You could just ask for what you wanted, and the model would understand the task well enough to do it properly. Now, with Claude, it feels like the opposite. You often need to build hard rails around the task just to stop it from doing the laziest technically defensible version of what you asked for. To be clear, you can still get good results. But it often needs constant preemptive reminders to be thorough. Not just one reminder...

··

I used Claude Code to build a kids safe generative coloring book app for my daughter!

Hey r/Anthropic! Recently I've been having a difficult time finding safe, kid-friendly, easy-to-use coloring book apps for my child. Most of what I found felt overloaded with ads, confusing, lacking safeguards, or just way too stimulating for a young kid. So I decided to build one myself. I wanted something that felt simple, calm, and safe the moment a child opens it. The app uses an API to generate coloring pages, but everything saved stays local on the device using SwiftData. I also built in parent protections across the app, so purchases, external links, and even the ter...

··

Can't use API keys due to low balance, yet I still have $11+

I originally had $5 when this happened, so I loaded another $6 and got the invoices and everything, yet I cannot use my API keys; they return an insufficient credit error. I have tried creating multiple new API keys and deleting the old ones. The credits are in the correct organization.

··

Claude Pro and $100 Plan

Reddit user complains about Claude Pro $20 tier rate limits and service degradation, considering upgrade to $100 plan.

··

Time amnesia and “You’re tired” logic

I’ve noticed 2 things recently (even on 4.6). 1. Time amnesia: it used to be so good at understanding what the current day is and how far away an upcoming event is. Now, even after a specific meeting it had in memory has passed, it says it is upcoming and tries to get me prepared. And if I start a chat on a day of travel, or at night, or anything it has context on, it will forever think it is still that night or day. 2. Pushing the user to rest or not “spiral”: the push to “rest”, refusing to give information more than once, or “it’s late, you’re spiraling” (even when it is incorrect and...

··
50 stories