OpenAI has truly stepped up their game and released some great models in the last three months.
Reddit user praises OpenAI's recent model releases and suggests the company is recovering trust after prior controversies.
Variational autoencoder model for long-term customer revenue forecasting from sparse transaction data in non-contractual settings.
Mixed membership sub-Gaussian extension of Gaussian mixture models for multi-component observations in genetics and text mining.
Beloved wolf that burrowed out of the zoo gripped the nation.
Framework identifying demographic unfairness in speech recognition embeddings via error typification across speaker groups.
Concept drift detection in malware classification using decision tree rulesets on EMBER2024 dataset across temporal windows.
Federated learning (FL) is no longer a research curiosity—it’s a practical response to a hard constraint: the most valuable data is often the least movable. Regulatory boundaries, data sovereignty rules, and organizational risk tolerance routinely prevent centralized aggregation. Meanwhile, sheer data gravity makes even permitted transfers slow, expensive, and fragile at scale.
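The constraint described above is usually addressed with FedAvg-style aggregation: only model parameters leave each silo, never raw records. A minimal sketch (client arrays and dataset sizes below are hypothetical, not from any cited system):

```python
# Minimal FedAvg sketch: each silo trains locally, then shares only its
# parameter vector and local record count -- raw data never moves.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical silos that cannot share raw data:
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]  # only these counts cross the boundary, not records

global_model = fedavg(clients, sizes)
```

Weighting by dataset size keeps the global update unbiased toward whichever silo happens to hold the most data.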
Theoretical proof that replica threshold marks information-theoretic boundary for nonlinear quantum state moment estimation.
Markov chain analysis of Dante's Commedia reveals increasing graphemic memory from Inferno to Paradiso via V/C encoding.
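The V/C encoding named in that summary reduces text to a two-symbol alphabet before estimating Markov transitions. A sketch of the basic idea (illustrative only, not the paper's exact pipeline; the sample line is the opening of the Inferno):

```python
# Map letters to V (vowel) / C (consonant), then estimate first-order
# Markov transition probabilities over the binary sequence.
from collections import Counter

def vc_encode(text, vowels="aeiou"):
    letters = [c for c in text.lower() if c.isalpha()]
    return "".join("V" if c in vowels else "C" for c in letters)

def transition_probs(seq):
    pairs = Counter(zip(seq, seq[1:]))
    totals = Counter(seq[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

encoded = vc_encode("Nel mezzo del cammin di nostra vita")
probs = transition_probs(encoded)  # e.g. probs[("C", "V")]
```

Higher-order variants of the same construction are what let one compare "graphemic memory" across the three canticles.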
Action-Conditioned World Models for cardiac dynamics using LeJEPA framework to detect pathological changes in physiological time-series.
We knew at some point Tim Cook would step down from his position as Apple's CEO. Over the last year, it has become increasingly obvious that John Ternus was his likely successor. The news this week was still a surprise, though - and this year's succession could lead to some important changes at the most influential company in tech. On this episode of The Vergecast, David and Nilay are joined by Daring Fireball's John Gruber to talk ab...
LLM-powered rhetorical analysis of 100 YouTube transcripts on cow urine health claims, examining appeals to authority and conspiracy framing.
DeepSeek v4 Flash demonstrates strong tool-use accuracy and multi-tool calling in code editing tasks, but trades speed for reasoning depth.
NL2VC-60 dataset for verified code synthesis from natural language via Dafny formal verification of LLM-generated implementations.
LLM-as-Judge framework for math reasoning evaluation using semantic equivalence instead of symbolic comparison to handle diverse solution formats.
KL divergence benchmark comparing Gemma 4 and Qwen 3.6 with quantized KV cache compression (q8_0, q4_0).
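The quantity such a benchmark measures is the per-token KL divergence between the full-precision model's next-token distribution and the one produced under quantized KV cache. A toy sketch (logit values below are stand-ins, not real Gemma/Qwen outputs):

```python
# KL(P || Q) between two next-token distributions given raw logits, in nats.
import numpy as np

def kl_divergence(p_logits, q_logits):
    p = np.exp(p_logits - np.max(p_logits)); p /= p.sum()
    q = np.exp(q_logits - np.max(q_logits)); q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

fp16_logits = np.array([2.0, 1.0, 0.1])
q4_logits = np.array([1.9, 1.1, 0.0])  # slightly perturbed, as q4_0 might
drift = kl_divergence(fp16_logits, q4_logits)
```

Averaged over a corpus, this drift is what lets one rank q8_0 against q4_0 without eyeballing generations.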
Adaptive Head Budgeting reduces multi-head attention computation by dynamically deactivating heads based on task complexity and input requirements.
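The mechanism named there amounts to scoring heads and masking out those below a per-input budget. A hypothetical sketch of that selection step (the scores and budgeting rule are stand-ins, not the paper's actual method):

```python
# Keep only the top-`budget` attention heads for a given input; the rest
# are deactivated (masked to zero) to save computation.
import numpy as np

def select_heads(head_scores, budget):
    """Return a 0/1 activation mask keeping the `budget` highest-scored heads."""
    order = np.argsort(head_scores)[::-1]
    mask = np.zeros_like(head_scores)
    mask[order[:budget]] = 1.0
    return mask

scores = np.array([0.9, 0.1, 0.5, 0.7])  # hypothetical head-importance scores
mask = select_heads(scores, budget=2)    # simple input -> small head budget
```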
WassersteinGrad method explains neural network predictions in weather forecasting via gradient-based feature attribution for high-dimensional inputs.
Medical imaging models rely on nonrobust adversarial-vulnerable features for in-distribution accuracy, creating safety/robustness tradeoffs.
QuantClaw enables task-dependent mixed-precision quantization for autonomous agent systems to reduce costs while maintaining performance.
Reddit discussion on developing research taste: prioritizing problem selection and simple baselines over technical complexity in ML research.
SpikingBrain2.0 5B model uses Dual-Space Sparse Attention for efficient long-context inference with reduced computation overhead.
Bilevel optimization framework models adversarial co-evolution between malware detectors and RL-based adaptive attackers.
Surprise! StrictlyVC San Francisco, which will kick off this year’s events lineup for TechCrunch on April 30 at the Sentro Filipino Cultural Center, is getting a new addition to its increasingly stacked lineup of speakers. Uber CTO Praveen Neppalli Naga will join the rest of the lineup to discuss operating at scale in the age of AI.
Tim Cook plans to step down from his CEO role in September, handing the reins to hardware chief John Ternus. Ternus may be inheriting one of the most durable businesses in tech, but he’s also stepping into a very different ecosystem than the one Cook spent decades shaping. The App Store’s 30% cut is under pressure, the behind-the-scenes power Apple once […]
HiLight trains lightweight Emphasis Actor to highlight pivotal evidence spans for frozen LLMs without compressing context.
SpectralFed and SpectralFuse use gradient von Neumann entropy for privacy-preserving client contribution estimation in federated learning.
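One plausible reading of "gradient von Neumann entropy" is to treat a client's gradient matrix like a density operator via its normalized singular-value spectrum and take the Shannon entropy of that spectrum. An illustrative sketch only, not SpectralFed's exact formulation:

```python
# Entropy of the normalized squared singular values of a gradient matrix:
# a rank-1 (highly concentrated) gradient gives near-zero entropy, while a
# diffuse full-rank gradient gives a larger value.
import numpy as np

def gradient_vn_entropy(grad):
    s = np.linalg.svd(grad, compute_uv=False)
    p = (s ** 2) / np.sum(s ** 2)  # spectrum of G G^T, trace-normalized
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=8), rng.normal(size=8))  # rank-1 gradient
full_rank = rng.normal(size=(8, 8))
```

The appeal for contribution estimation is that this scalar reveals how "spread out" a client's update is without exposing the gradient itself.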
Gated context projectors improve cross-stage coherence in autonomous driving VQA by reducing perception-planning inconsistencies by 42.6%.
SOLAR-RL bridges offline and online RL for training MLLM GUI agents on dynamic tasks, combining trajectory semantics with long-horizon learning.
Study evaluates whether natural-domain foundation models improve cardiac MRI reconstruction versus domain-specific models like BiomedCLIP.