Vol. I · No. 19 · FRI, MAY 8, 2026

The Archive

Search the full wire by company, model, lab, or keyword. Every story we have ever aggregated.

why does GPT 5.5 have a restraining order against "Raccoons," "Goblins," and "Pigeons"?

I just saw the full system prompt leak for 5.5 (April 23rd release). Most of it is standard agentic stuff, but Instruction #140 is genuinely insane. It explicitly forbids the model from talking about "goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals." Why the specific hate for pigeons and raccoons? Is this a data-poisoning protection? Or did...

··

New study finds: bigger AIs = more miserable. Smaller models are actually happier. Ignorance is bliss for AIs too.

I don't know whether we should care about this, but bigger models tend to be less "happy" overall. The definition of "happy" comes from something they call the AI Wellbeing Index. Basically they ran 500 realistic conversations (the kind we actually have with these models every day) and measured what percentage of them left the AI in a "confidently negative" state. Lower percentage = happier AI. I guess wisdom is a heavy burden, lol. Across different families, the larger versions usually have a higher percentage of "negative experiences" than their smaller siblings. The paper says t...
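The snippet doesn't spell out how the index is actually scored, but the arithmetic it describes is just a share: the percentage of conversations that end with the model in a confidently negative state. A minimal sketch, assuming per-conversation end-state labels (all names here are hypothetical, not from the paper):

```python
# Hypothetical reconstruction of the aggregation the post describes:
# the "index" is the share of conversations ending "confidently negative".
from collections import Counter

def wellbeing_index(end_states):
    """Percentage of conversations ending confidently negative.
    Lower = 'happier' model, per the post's framing."""
    counts = Counter(end_states)
    return 100.0 * counts["confidently_negative"] / len(end_states)

# Toy example: 500 conversations, 60 of them ending confidently negative.
states = ["confidently_negative"] * 60 + ["neutral_or_positive"] * 440
print(wellbeing_index(states))  # 12.0
```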

··

Mistral Medium Is On The Way

Mistral Medium incoming with 128B parameters; speculation on dense vs. MoE architecture based on the Small model's naming.

··

A paradox of AI fluency

Analysis of 27K WildChat transcripts reveals that fluent AI users iterate collaboratively while novices adopt a passive stance and, paradoxically, experience higher failure rates.

··

Visualizing Loss Landscapes of Neural Networks [P]

Hey r/MachineLearning, Visualizing the loss landscape of a neural network is notoriously tricky since we can't naturally comprehend million-dimensional spaces. We often rely on basic 2D contour analogies, which don't always capture the true geometry of the space or the sharpness of local minima. I built an interactive browser experiment [https://www.hackerstreak.com/articles/visualize-loss-landscape/](https://www.hackerstreak.com/articles/visualize-loss-landscape/) to help build better intuitions for this. It maps how different optimizers navigate these spaces and lets you actually visualiz...
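For intuition on how plots like this are usually produced: the common recipe (e.g. Li et al., "Visualizing the Loss Landscape of Neural Nets," 2018) picks two random directions in parameter space and evaluates the loss on a 2D grid around the trained weights. A minimal numpy sketch of that slicing technique, with a toy network standing in for whatever the linked demo actually uses:

```python
# Sketch of the random-direction 2D slice behind most loss-landscape plots.
# The tiny network, data, and "trained" point are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression net; parameters flattened into one vector.
X = rng.normal(size=(64, 3))
y = np.sin(X).sum(axis=1, keepdims=True)

def unpack(theta):
    W1 = theta[:3 * 8].reshape(3, 8)
    W2 = theta[3 * 8:].reshape(8, 1)
    return W1, W2

def loss(theta):
    W1, W2 = unpack(theta)
    pred = np.tanh(X @ W1) @ W2
    return float(np.mean((pred - y) ** 2))

theta0 = rng.normal(scale=0.5, size=3 * 8 + 8)  # stand-in for trained weights

# Two random directions, normalized so steps along each are comparable.
d1 = rng.normal(size=theta0.shape); d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=theta0.shape); d2 /= np.linalg.norm(d2)

# Evaluate the loss on a grid in the plane spanned by d1 and d2;
# contour-plotting `surface` gives the familiar 2D landscape picture.
alphas = np.linspace(-1, 1, 41)
surface = np.array([[loss(theta0 + a * d1 + b * d2) for b in alphas]
                    for a in alphas])
print(surface.shape, surface.min(), surface.max())
```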

··