not much happened today
claude gpt-5.2-pro dgm-h rllm anthropic meta-ai-fair agent-frameworks workflow-automation multi-agent-systems reinforcement-learning reward-models self-improving-agents benchmark-generation operational-efficiency closed-loop-feedback jenny_zhang jase_weston mikhail_parakhin jeremyphoward
Anthropic introduced Claude Cowork and Claude Code with desktop control of mouse, keyboard, and screen in a macOS research preview, expanding agent capabilities beyond APIs and browsers. The agent ecosystem is evolving toward long-running, parallel, tool-rich workflows, with projects like Hermes Agent, T3 Code, Command Center, and Parchi enhancing multi-agent orchestration and autonomous task management. Operational challenges, such as fragility and inefficiency in subagents (including GPT-5.2 Pro and Claude browser/computer use), highlight the need for closed-loop feedback systems. Research from Meta AI advances self-improving agents with Hyperagents / DGM-H, which enables meta-level procedural improvements, and unifies reinforcement-learning post-training with RLLM (RL + LM-as-RM) to improve reward modeling across task types. Additionally, WebArena-Infinity drastically reduces the cost of constructing browser environments, accelerating benchmark and environment generation.
not much happened today
claude-3 codex gemini gpt-5.2-pro anthropic openai google sakana-ai cursor baseten epoch-ai-research deepmind benchmarking reasoning continual-learning reinforcement-learning model-performance agentic-ai security model-training sama fchollet shane_legg demishassabis
Anthropic launches "Claude in Excel Pro" with enhanced features. OpenAI previews its upcoming Codex agent loop and cybersecurity measures. Google raises Gemini App quotas and partners with Sakana AI on advanced AI Scientist projects in Japan. Cursor introduces Agent Skills for dynamic context focus. GPT-5.2 Pro scores 31% on FrontierMath Tier 4, a significant benchmark advance. Baseten raises $300M at a $5B valuation, targeting high-performance inference. Discussions highlight math benchmarks as indicators of AI capability, uneven AGI progress, and reasoning and continual learning as the next frontiers. Notable figures include Sam Altman, François Chollet, Shane Legg, and Demis Hassabis.