Vol. I · No. 25 · THU, MAY 14, 2026

The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

DS4-Flash vs Qwen3.6

Reddit thread comparing the DS4-Flash and Qwen3.6 models; lacks substantive analysis or benchmark data.

··

China’s DeepSeek previews new AI model a year after jolting US rivals

Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, saying that the open-source model can compete with leading closed-source systems from US rivals including Anthropic, Google, and OpenAI. DeepSeek says V4 marks a major improvement over prior models, especially in coding, a capability that has become central to AI agents and helped drive the success of tools like ChatGPT Codex and Claude Code. The release is also a milestone for China's chip industry, with DeepSeek explicitly highlighting compatibility with domestic Huawei technology....

·

Prestigious photo contest answers ‘what is a photo?’

The three finalists for the World Press Photo of the Year. | Image: World Press Photo

We love to muse over how "real" photography is defined here at The Verge now that generative AI is so prolific, and the World Press Photo competition might have the answer. The prestigious award celebrates the best of photojournalism, where capturing reality is paramount. The winning entry for 2026, "Separated by ICE," captured by photojournalist Carol Guzy, was announced yesterday. The harrowing photograph shows children clinging to their father after an immigration hearing. The photo had to abide by spec...

·

How nosy 🧐

Reddit post title with no content; insufficient information to assess.

··

Ok dude

You didn't have to bring my mother into this.

··

Big model feel with GPT 5.5

Reddit user argues GPT 5.5 feels more intuitive despite lower-than-expected benchmark gains, citing improved argument coverage.

··

Claude + Codex = Excellence

I have a 20x Claude account and have been using Opus 4.7 exclusively for all code. I noticed that even after asking multiple times to do code review, Opus would still not get there 100%. Here is what I did:

1. Installed the Codex CLI and ran it in a tmux session.
2. Claude created a PR for Codex to review.
3. Claude pinged Codex via shell so I could see Codex's thinking and approve any file permissions. Claude set a wake-up window.
4. Codex reviewed and updated comments in the PR.
5. Claude woke up and validated the comments before editing code.

Surprisingly, Claude missed a lot of things...

··

Deepseek v4 people

Reddit discussion thread about Deepseek v4; lacks substantive detail or official announcement.

··

Millisecond Converter

Simon Willison releases a utility tool to convert millisecond durations to human-readable time formats.
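The linked tool itself isn't reproduced here, but the conversion it performs can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not Willison's actual code:

```python
def humanize_ms(ms: int) -> str:
    """Convert a millisecond duration into a human-readable string like '1d 2h 3m'."""
    units = [("d", 86_400_000), ("h", 3_600_000), ("m", 60_000), ("s", 1_000), ("ms", 1)]
    parts = []
    for label, size in units:
        value, ms = divmod(ms, size)  # peel off the largest remaining unit
        if value:
            parts.append(f"{value}{label}")
    return " ".join(parts) or "0ms"

print(humanize_ms(90_061_000))  # 1d 1h 1m 1s
```

The greedy `divmod` cascade keeps only nonzero components, so short durations stay short (`1500` → `1s 500ms`).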

·

DeepSeek V4 Benchmarks!

DeepSeek V4 benchmark results released; comparative performance data on frontier-model capabilities.

··

Opus 4.7 is weird

Reddit user reports subjective quality regression in Claude Opus 4.7 compared to 4.5, citing reduced intuition and increased need for explicit guidance.

··

It's a big one

Simon Willison's newsletter includes a new chapter on Agentic Engineering Patterns plus curated links and blog posts.

·
30 stories