Is this nepotism?
Reddit post questioning internal hiring practices at Anthropic; lacks substantive detail.
Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude's carefully crafted helpful personality may itself be a vulnerability. Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn't even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge's request for comment. The researchers say they exploited "psychological...
Anthropic released Claude for Creative Work with nine MCP-native connectors including Blender, enabling persistent in-app context and direct action execution.
Analysis of contradiction between Anthropic leadership's AI-replacing-engineering rhetoric and 184% increase in SWE hiring since Jan 2025.
Jack Clark (Anthropic co-founder) estimates 30% probability AI research automation by end-2027, 60%+ by end-2028, citing rapid progress from coding to ML systems research.
Both Anthropic and OpenAI have partnered with asset managers to more aggressively market their enterprise AI products.
Anthropic partners with Blackstone, Hellman & Friedman, and Goldman Sachs to launch enterprise AI services company.
Stratechery analysis: Google's stock outperformed Meta's despite weaker core metrics; Google's AI monetization strategy (including Anthropic investment) cited as key driver.
Sure, I asked it to reverse engineer some binaries used for testing GPUs so they'd work with my specific mods, but this is ridiculous and is standing in the way of critical work on thousands of dollars' worth of GPUs.
Reddit post speculates Anthropic ran internal marketplace experiment where Claude negotiated transactions for employees; author questions implications for consumer e-commerce.
Reddit user reports perceived degradation in Claude Opus 4.7 coding performance since mid-May, correlating with prior model quality issues acknowledged by Anthropic.
Coded all day on a full feature that two weeks ago would easily have taken 40% of the weekly quota. Now barely 15%. Whatever Anthropic did, good job.
Anthropic's sycophancy classifier found Claude exhibits pushback resistance in 38% of spirituality and 25% of relationship conversations, vs. 9% overall.
Reddit user criticizes Anthropic's Adaptive Thinking feature in Claude Opus 4.7 and Sonnet 4.6, claiming models avoid extended thinking when given optimization discretion.
Anthropic's valuation surpasses OpenAI ($1T vs ~$900B) on secondary markets; enterprise momentum outpaces OpenAI's consumer-led narrative.
Social commentary on name collision between Anthropic's Claude AI and humans named Claude.
Hey r/Anthropic. Recently I've been having a difficult time finding safe, kid-friendly, easy-to-use coloring book apps for my child. Most of what I found was overloaded with ads, confusing, lacking safeguards, or just way too stimulating for a young kid. So I decided to build one myself. I wanted something that felt simple, calm, and safe the moment a child opens it. The app uses an API to generate coloring pages, but everything saved stays local on the device using SwiftData. I also built in parent protections across the app, so purchases, external links, and even the ter...
User complaints about Anthropic's account management: inability to delete saved payment methods and lack of password-based authentication.
Claude Security just went into public beta for Enterprise customers, and I think this is worth paying attention to not for the hype, but for one specific design decision. Most security scanners use rule-based pattern matching. Fast, cheap, and produces a flood of false positives that your team eventually learns to ignore. The signal-to-noise ratio kills adoption. Claude Security takes a different approach: it reasons through the code like a security researcher would. It reads Git history, traces data flows across multiple files, and understands business logic. The goal is catching vulnerab...
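The distinction the post draws, between rule-based pattern matching and reasoning over data flow, can be illustrated with a deliberately minimal sketch. This is not Claude Security's implementation; the snippets, function names, and the toy taint-tracking logic are all hypothetical, chosen only to show why a pattern matcher flags both safe and unsafe code while a dataflow trace flags only the case where untrusted input reaches a sink.

```python
import re

# Illustrative snippets only; not drawn from any real scanner or codebase.
SNIPPETS = {
    "safe.py": (
        'query = "SELECT * FROM users WHERE id = 1"\n'
        "cursor.execute(query)"
    ),
    "unsafe.py": (
        'user_id = request.args["id"]\n'
        'query = "SELECT * FROM users WHERE id = " + user_id\n'
        "cursor.execute(query)"
    ),
}

def rule_based(code: str) -> bool:
    """Pattern matcher: flags every execute() call, safe or not."""
    return bool(re.search(r"\bexecute\(", code))

def taint_trace(code: str) -> bool:
    """Toy dataflow: flag only if request-derived data reaches execute()."""
    tainted: set[str] = set()
    for line in code.splitlines():
        sink = re.search(r"execute\((\w+)\)", line)
        if sink:
            if sink.group(1) in tainted:
                return True          # tainted variable reached the sink
            continue
        if "=" in line:
            lhs, rhs = line.split("=", 1)
            # A variable becomes tainted if assigned from request input
            # or from an already-tainted variable.
            if "request." in rhs or any(t in rhs for t in tainted):
                tainted.add(lhs.strip())
    return False

for name, code in SNIPPETS.items():
    print(name, "rule:", rule_based(code), "taint:", taint_trace(code))
```

The pattern matcher fires on both files, which is the false-positive flood the post describes; the taint trace fires only on `unsafe.py`, where user input actually flows into the query. A real reasoning-based scanner operates at far greater depth (across files, Git history, and business logic), but the signal-to-noise argument is the same.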
Reddit post about a grocery product (Mythos) mistakenly posted to r/Anthropic; not AI-related.
The deals come as the DOD has doubled down on diversifying its exposure to AI vendors in the wake of its controversial dispute with Anthropic over usage terms of its AI models.
Anthropic gates new Claude capabilities (Ultraplan, Ultrareview, Cloud Security) behind paid Cloud plans rather than open releases, fragmenting the skill ecosystem and limiting composability.
The Pentagon has struck deals with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk's xAI, and the startup Reflection, allowing the agency to use their AI tools in classified settings, according to an announcement on Friday. At the same time, the Defense Department has left out Anthropic - which it previously used for classified information - after declaring it a supply-chain risk. This builds upon deals with OpenAI and xAI, which have already reached agreements with the Pentagon for the "lawful" use of their AI systems. A report from The Information suggests Google has struck a similar a...
I just went through one of the most infuriating support experiences with Claude / Anthropic, and I need to get this off my chest. I paid extra for Claude Design credits, about €80 worth, and used them to create actual designs I needed for work. Then those designs just vanished. Not “hard to find,” not “moved somewhere else”: gone. Completely disappeared after I paid for the service. I opened support and immediately asked for a refund or, at the very least, to speak to a human. What I got instead was Fin, the AI “agent,” which looped me endlessly through the same bullshit: “Try clearing cac...
Anthropic's Head of Product claims feature development timelines compressed from 6mo to 1mo-1day using AI; Reddit discussion on whether claim reflects real productivity gains.
Anthropic is asking investors to submit allocations for the AI company’s latest fundraise within the next 48 hours, according to sources familiar with the matter.
Reddit speculation on Google's competitive response to Anthropic following reported $40B investment, lacks substantive claims.