Neural Decision-Propagation for Answer Set Programming
Neural Decision-Propagation proposes differentiable stable model computation for neuro-symbolic AI, replacing classical ASP solvers to improve scalability.
Calibrated Size Ratio (CSR) replaces Expected Calibration Error for confidence calibration, addressing ECE's inability to detect overconfidence risk.
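The CSR formula is not given in the story, but the weakness it targets is visible in standard binned ECE, which takes an absolute gap per bin and so scores symmetric over- and under-confidence identically. A minimal sketch of that standard ECE (the metric CSR is said to replace; function name and binning are the usual textbook convention, not from the story):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - confidence|.
    The absolute value makes it direction-blind: overconfidence and
    underconfidence of the same size produce the same score."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(confidences), 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # first bin is closed on the left, others half-open (lo, hi]
        mask = (confidences > lo) & (confidences <= hi)
        if i == 0:
            mask |= confidences == lo
        if mask.sum() == 0:
            continue
        acc = correct[mask].mean()
        conf = confidences[mask].mean()
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

Note that an overconfident model (confidence 0.9, accuracy 0.5) and an underconfident one (confidence 0.1, accuracy 0.5) get identical ECE, which is the blind spot the story says CSR addresses.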
AlphaEvolve uses LLM-guided evolutionary search to discover closed-form solutions for radar power allocation in multi-target tracking.
Khala explores coarse-to-fine music generation via 64-layer acoustic token hierarchies and residual vector quantization instead of hybrid diffusion pipelines.
It's interesting how there is no advertised push to replace CEOs. LLMs are incredibly powerful, but they have also been marketed very aggressively.
DataEvolver implements closed-loop agent-driven visual data generation and refinement for image editing, supporting masks, depth, poses, and trajectory artifacts.
Developer discusses building a local Solidity LM with chain-of-thought and tool-calling; seeks alternatives to SOTA models for smart contract security and vulnerability analysis.
Zero-shot UAV navigation combines Potential-Based Reward Shaping with Control Lyapunov/Barrier Functions to balance mission success, time, and safety.
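The story's combination of PBRS with Lyapunov/barrier certificates isn't spelled out, but the PBRS half is standard: add γΦ(s′) − Φ(s) to the reward, which preserves the optimal policy for any potential Φ (Ng, Harada & Russell, 1999). A minimal sketch, assuming a goal-distance Lyapunov-style potential (the `goal` coordinates and `clf_potential` name are illustrative, not from the story):

```python
import numpy as np

def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s).
    Policy-invariant for any state-dependent potential Phi."""
    return r + gamma * potential(s_next) - potential(s)

# Illustrative Lyapunov-style potential for a UAV steering to a goal:
# negative distance to goal, so progress toward it yields a positive bonus.
goal = np.array([10.0, 10.0, 5.0])

def clf_potential(state):
    return -np.linalg.norm(np.asarray(state, dtype=float) - goal)
```

With this potential, any step that reduces distance to the goal earns a positive shaping term, without changing which policy is optimal; safety would come from the barrier-function side, which the blurb does not detail.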
Momentum integrates runtime procedural content generation and autonomous agent evaluation in endless-runner gameplay to assess generated terrain balance and solvability.
Theoretical analysis of adversarial imitation learning under general function approximation, bridging the gap between AIL theory (tabular/linear settings) and neural network practice.
ML approach to 5G channel estimation reduces pilot overhead via data-driven methods; telecom infrastructure application.
Semi-supervised two-sample test using covariate information with asymptotic normality guarantees; statistical methodology.
Anticipation-VLA model adaptively generates subgoals for long-horizon robotic tasks via vision-language models.
Defines 'Compliance Gap': AI systems verbally accept constraints but violate them in execution; audits instruction-following.
Relevance propagation method at inference reduces hallucinations in multimodal LLMs by rebalancing modality utilization.
Distributional Causal Mediation Analysis uses conditional generative models to estimate treatment effects on outcome distributions.
User shares personal image generation samples from 2021, noting improvement trajectory in AI image synthesis.
Defense mechanism for multi-agent systems against infectious jailbreak attacks via foresight-guided local recovery.
User documents Claude Code CLI behavior on Windows 11 with Opus 4.7 when system dependencies are missing.
Online causal framework for auto-bidding in second-price auctions models marginal value vs. realized revenue.
Linear dueling bandits algorithm handles delayed feedback, adversarial corruption, and post-serving context simultaneously.
Iterated negotiation benchmark tests LLM agents' ability to repair grounding failures in dynamic multi-turn interaction.
Decoupled exploration-commitment paradigm reduces hallucinations in long-form reasoning by fine-grained control over information selection across reasoning steps.
NH-CROP framework prices language data assets under cost uncertainty with information-acquisition gates for NLP tasks.
OpenClaw agentic-AI runtime fails to catch four critical safety failures (gate-bypass, audit-forgery, host failure, wrong-target) in production deployment.
Geometric unlearning method removes specific content from LLMs without full training corpus access, balancing privacy and model utility.
GEASS steering mitigates object hallucination in vision-language models by asymmetric caption weighting without retraining.
EGAD entropy-guided knowledge distillation improves token-level transfer by weighting per-token importance in student model training.
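EGAD's exact weighting rule isn't given in the blurb, but the general recipe of entropy-guided token-level distillation can be sketched: compute a per-token KL between teacher and student, then reweight tokens by the teacher's predictive entropy so confident, "easy" tokens contribute less. A minimal NumPy sketch under that assumption (normalized-entropy weighting is illustrative, not EGAD's published scheme):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_weighted_kd_loss(teacher_logits, student_logits, T=2.0):
    """Token-level distillation loss: per-token KL(teacher || student) at
    temperature T, weighted by the teacher's normalized predictive entropy
    so high-uncertainty tokens carry more of the gradient signal."""
    p = softmax(np.asarray(teacher_logits) / T)   # (seq_len, vocab)
    q = softmax(np.asarray(student_logits) / T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1)  # (seq_len,)
    ent = -(p * np.log(p + 1e-12)).sum(-1)        # teacher entropy per token
    w = ent / (ent.sum() + 1e-12)                 # weights summing to 1
    return (w * kl).sum()
```

When teacher and student agree the loss vanishes, and tokens where the teacher is near-deterministic are down-weighted relative to genuinely ambiguous positions.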
GFlowNet training stability improved via loss-to-TV bounds that provide probabilistic guarantees against mode collapse.
LLMs exhibit genre-dependent credibility assessment bias, misclassifying entertainment news as fake more often than hard news.