An update on our election safeguards
Anthropic outlines safeguards for Claude during US midterms and global elections to mitigate disinformation and manipulation risks.
Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, saying that the open-source model can compete with leading closed-source systems from US rivals including Anthropic, Google, and OpenAI. DeepSeek says V4 marks a major improvement over prior models, especially in coding, a capability that has become central to AI agents and helped drive the success of tools like ChatGPT Codex and Claude Code. The release is also a milestone for China's chip industry, with DeepSeek explicitly highlighting compatibility with domestic Huawei technology....
Anthropic and NEC partner to develop AI engineering talent in Japan, expanding local AI workforce capacity.
Anthropic postmortem: three Claude Code harness bugs, not model quality, caused two-month regression in output quality.
I benchmarked and compared Claude Opus 4.5 vs Opus 4.6 vs Opus 4.7 vs Sonnet 4.6, testing effort levels (low, medium, high, xhigh, max), as I was curious about token usage/costs and performance within Claude Code https://ai.georgeliu.com/p/tested-claude-ai-llm-models-effort Hope folks find this useful. The test was done with Claude Code v2.1.117, which is apparently the fixed version from Anthropic's post-mortem announcement.
Anthropic restores Claude Code access to Claude Pro subscribers after temporary removal.
Claude users can now access more apps with Anthropic's AI, thanks to new connectors for everything from hiking to grocery shopping. Anthropic already supported connecting numerous work-related apps to Claude, like Microsoft apps, but this expansion focuses on personal apps like Audible, Spotify, Uber, AllTrails, TripAdvisor, Instacart, TurboTax, and others. Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. A...
Reddit user speculates OpenAI reached AGI and will outpace Anthropic; compares Codex and Claude Code features.
I’ve been trying to make a payment on my 2nd account and it keeps getting rejected. I've tried 8 times since yesterday. All the payment info is correct, and the same card works on my 1st account.
Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model fell into the wrong hands anyway. According to Bloomberg, a "small group of unauthorized users" has had access to Mythos - whose existence was first revealed in a leak - since the day Anthropic announced plans to offer it to a select group of companies for testing. Anthropic says it is investigating. That's a rough look for a company that has built its brand on taking AI ...
User observes API rate-limit reset time shifted from Thursday to Saturday; speculates new model launch.
The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group was able to access the program in part because one of the members of the group is a third party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located based ...
He said it on the [Dwarkesh Podcast](https://mrkt30.com/anthropic-mythos-triggers-chinas-ai-arms-frenzy/) this week and I have not been able to stop thinking about it. His argument was not that China is not a threat. It was that cutting them off and treating them as an enemy is probably not the smartest long-term play. His actual words were that victimising them and turning them into an enemy likely is not the best answer. The context here is Huawei targeting 750,000 AI chip shipments this year. It is nowhere near Nvidia's compute but the direction of travel is clear. And if DeepSeek ends u...
Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic. Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a profit. So if the users wanted its Claude AI to power their popular agents, they'd have to start paying handsomely for the privilege. "Our subscriptions weren't built for the usage patterns of these third-party tools," wrote Boris Cherny, head of Claude Code, on X. "We wa...
So we all got that extra credit a few weeks back, right? $150 in my case. I turned off extra usage until I needed it, which is today. Tried to call the API - no balance. So the extra usage is only via claude code. Thanks for that, Anthropic. EDIT: So I added $25 in API usage, and the text against that balance says: "Your credit balance will be consumed with **API, Claude Code** and Workbench usage. " Jesus wept, Anthropic. Do you seriously ever get someone to sit and think about what you're doing?
[https://www.businessinsider.com/anthropic-trillion-dollar-valuation-on-secondary-markets-2026](https://www.businessinsider.com/anthropic-trillion-dollar-valuation-on-secondary-markets-2026)
I don’t get what the fuss about Mythos is, from the reporting I’ve seen…. Mythos found a critical vulnerability in OpenBSD, which is known for robust security, that went unnoticed by humans for 27 years. So what? Sure, maybe* it was a super obscure bug to find. *It had to have been very obscure to survive 27 years of review by humans. I repeat: so what? Anthropic, the company whose models are used for the majority of serious coding etc., used all the data it had access to, and presumably a lot of compute, to train a computer to find bugs made by humans that humans missed when ...
Untenable demand has Anthropic exploring new approaches to rationing its service.
Several US federal agencies are taking up Anthropic's new cybersecurity model to find vulnerabilities, but one is reportedly not getting in on the action: the nation's central cybersecurity coordinator. On Tuesday, Axios reported that the Cybersecurity and Infrastructure Security Agency (CISA) didn't have access to Mythos Preview, which Anthropic has touted as a powerful tool for finding and patching security vulnerabilities. Meanwhile, other agencies like the Commerce Department and the National Security Agency (NSA) are reportedly using the model, and President Donald Trump's administration has bee...
Anthropic suspended ~110 users at agricultural tech company without warning; users report lack of transparency in account enforcement.
At one point people thought of you as better than OpenAI and Google. We know AI companies are losing money.
- Just say, "We don't release Mythos because it'd be too expensive."
- Just say, "We're going to increase the prices of Pro and Max because we're running out of money."
... all this under-the-radar marketing firm BS just means that you've decided to hemorrhage social capital as well as financial capital. Why would you want to do this?
Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," Bloomberg reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor's access and "commonly used internet sleuthing tools." The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities "in every m...
Autistic user testimonial about using Claude Co-work for organizing 20 years of personal creative systems and documents.
Anthropic briefly moved Claude Code from $20 Pro to $100+ Max tier, then reverted; pricing confusion around feature tiers.
Anthropic told TechCrunch it is investigating the claims, but maintains that there is no evidence that its systems have been impacted.
With an IPO looming for Elon Musk's SpaceX / xAI / X combo platter of companies, SpaceX has announced an odd arrangement to either acquire the automated programming platform Cursor for $60 billion or pay a fee of $10 billion. Buying this startup that's focused on AI coding could help xAI's tools compete with market leader Anthropic, as well as the other competitors. A report by The Information this week said Sergey Brin has directed Google's "strike team" to help its agentic AI tools catch up, while Sam Altman reportedly declared a "code red" at OpenAI last year before shutting down Sora to f...
Anthropic's unreleased Mythos model reportedly accessed by unauthorized users, raising security and access control concerns.
User reports Claude Code feature removed from Anthropic Pro plan pricing page.
CTO says new AI model is "every bit as capable" as world's best security researchers.