All tags
Model: "glm-5"
MiniMax M2.7: GLM-5 Performance at 1/3 the Cost, SOTA Open Model
minimax-m2.7 sonnet-4.6 glm-5 mimo-v2-pro mamba-3 qwen-3.5 kimi-k2.5 gpt-5.4-mini minimax xiaomi artificial-analysis ollama trae yupp openrouter vercel zo opencode kilocode cartesia self-evolving-agents reasoning cost-efficiency token-efficiency hybrid-architecture harness-engineering agent-harnesses skills memory-optimization architecture feedback-loops api inference execution-environment
MiniMax M2.7 is the headline model release, described as a "self-evolving agent" with strong performance metrics: 56.22% on SWE-Pro, 57.0% on Terminal Bench 2, and parity with Sonnet 4.6. It features recursive self-improvement across skills, memory, and architecture. Artificial Analysis places M2.7 on the cost/performance frontier with an Intelligence Index score of 50, matching GLM-5 (Reasoning) at a fraction of the cost; distribution is available via platforms like Ollama Cloud and OpenRouter. Xiaomi's MiMo-V2-Pro is noted as a serious Chinese API-only reasoning model, scoring 49 on the Intelligence Index with favorable token efficiency. Cartesia's Mamba-3 is highlighted as an SSM optimized for inference-heavy use, with early reactions comparing it to hybrid transformer architectures like Qwen 3.5 and Kimi Linear. The report also emphasizes a shift from prompting to harness engineering: the execution environment and agent harnesses, including skills and MCP, are becoming key differentiators in AI system design. This includes discussion of tools, repo legibility, constraints, and feedback loops, with mentions of DSPy and GPT-5.4 mini as important components in this evolving landscape.
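The "harness engineering" idea above can be sketched in a few lines: the harness, not the prompt, fixes which tools exist, how results are fed back, and when execution stops. This is a minimal illustration only; `run_agent`, `TOOLS`, and the stand-in model below are hypothetical names, not any vendor's actual API.

```python
import json

TOOLS = {
    "add": lambda a, b: a + b,   # a deterministic tool the model may call
}

def fake_model(messages):
    """Stand-in for an LLM: calls the 'add' tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"answer": f"The sum is {result}"}

def run_agent(task, model, tools, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(messages)
        if "answer" in action:                   # terminal action
            return action["answer"]
        fn = tools[action["tool"]]               # harness enforces the tool set
        result = fn(**action["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("step budget exhausted")  # harness-imposed constraint

print(run_agent("add 2 and 3", fake_model, TOOLS))  # → The sum is 5
```

Note that the tool registry, feedback format, and step budget all live in the harness loop; swapping the model leaves the environment's constraints intact, which is the point of the shift described above.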
not much happened today
opus-4.6 glm-5 anthropic ibm perplexity-ai llamaindex deepseek google-chrome persistent-memory agent-infrastructure cross-device-synchronization long-context sparse-attention inference-optimization computer-architecture task-completion systems-performance pamelafox tadasayy llama_index bromann dair_ai omarsar0 abxxai teknuim bcherny kimmonismus _catwu alexalbert__ realyushibai
MCP tools remain relevant for deterministic APIs despite ergonomic criticisms, and new web MCP support in Chrome v146 enables continuous browsing agents. Persistent memory is emerging as a key differentiator for agents: IBM reports improved task completion rates, and multi-agent memory is framed as a computer-architecture challenge. Agent UX is evolving toward always-on, cross-device operation, exemplified by Perplexity Computer on iOS and Claude Code session management. Anthropic made Opus 4.6's 1M-token context the default with no extra long-context API charges, achieving 78.3% on MRCR v2 at 1M tokens. Sparse-attention optimizations like IndexCache in DeepSeek Sparse Attention yield significant speedups on large models with minimal code changes.
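The sparse-attention speedups mentioned above come from each query attending to only a small top-k subset of keys instead of all of them. The sketch below illustrates that top-k selection idea in NumPy; it is not DeepSeek's implementation (their indexer scores keys cheaply rather than computing full scores), and `topk_sparse_attention` is a made-up name for illustration.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    scores = K @ q / np.sqrt(q.shape[0])   # full scores; a real indexer
    idx = np.argsort(scores)[-k:]          # would approximate these cheaply
    s = scores[idx]
    w = np.exp(s - s.max())
    w /= w.sum()                           # softmax over selected keys only
    return w @ V[idx]                      # attend to k keys, not all n

rng = np.random.default_rng(0)
n, d = 1024, 64
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
q = rng.normal(size=d)
out = topk_sparse_attention(q, K, V, k=32)
print(out.shape)  # → (64,)
```

With k fixed, the attention cost per query stops growing with sequence length n, which is why such schemes matter most for long-context serving.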
not much happened today
claude-4.6 claude-opus-4.6 claude-sonnet-4.6 qwen-3.5 qwen3.5-397b-a17b glm-5 gemini-3.1-pro minimax-m2.5 anthropic alibaba scaling01 arena artificial-analysis benchmarking token-efficiency ai-agent-autonomy reinforcement-learning asynchronous-learning model-performance open-weights reasoning software-engineering agentic-engineering eshear theo omarsar0 grad62304977 scaling01
Anthropic released Claude Opus/Sonnet 4.6, showing a significant Intelligence Index jump but with increased token usage and cost. Anthropic also shared insights on AI agent autonomy, highlighting the prevalence of human-in-the-loop operation and software-engineering tool calls. Alibaba launched Qwen 3.5, prompting discussion of reasoning efficiency and token bloat, and open-sourced Qwen3.5-397B-A17B FP8 weights. The GLM-5 technical report introduced asynchronous agent reinforcement learning and compute-efficient training techniques. Rumors about Gemini 3.1 Pro suggest longer reasoning capabilities, while MiniMax M2.5 appeared on community leaderboards. The community continues to debate benchmark reliability and model performance nuances.
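"Asynchronous agent reinforcement learning," mentioned above from the GLM-5 report, means actors keep generating rollouts with whatever (possibly stale) policy version they have while the learner updates without waiting for in-flight trajectories. This is a toy sketch of that decoupling under stated assumptions; all names are illustrative, not the report's actual system.

```python
import queue
import threading

traj_q = queue.Queue()
policy_version = 0

def actor(n_rollouts):
    # Rollouts are tagged with the (possibly stale) policy version they used.
    for _ in range(n_rollouts):
        traj_q.put({"policy_version": policy_version, "reward": 1.0})

def learner(n_updates):
    global policy_version
    for _ in range(n_updates):
        traj_q.get()         # consume whichever rollout is ready first
        policy_version += 1  # update without blocking the actors

t = threading.Thread(target=actor, args=(4,))
t.start()
learner(4)
t.join()
print(policy_version)  # → 4
```

The staleness tag is what a real system would use for off-policy corrections (e.g. importance weighting); here it just shows that actors and learner need not be in lockstep.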
MiniMax-M2.5: SOTA coding, search, toolcalls, $1/hour
minimax-m2.5 glm-5 minimax-ai togethercompute huggingface intel wandb reinforcement-learning agent-based-models model-quantization benchmarking model-efficiency multi-turn-dialogue infrastructure-optimization cost-efficiency on-device-ai
MiniMax-M2.5 is now open source, featuring an "agent-native" reinforcement learning framework called Forge trained across 200k+ RL environments for coding, tool use, and workflows. It posts strong benchmark scores such as 80.2% on SWE-Bench Verified and emphasizes cost-efficiency with claims like "$1 per hour at 100 tps" and good on-device performance. The Forge RL system uses multi-level prefix caching and a high rollout compute share (~60%) to generate millions of trajectories daily. Independent reviews note improved stability and multi-turn viability but high token usage. The ecosystem rapidly adopted MiniMax-M2.5 with quantized releases including 2-bit GGUF and INT4 formats. Meanwhile, Together markets GLM-5 as a leading open-source model for long-horizon agents, with 77.8% on SWE-Bench Verified and MoE efficiency via DeepSeek Sparse Attention.
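The prefix caching mentioned for Forge exploits the fact that many rollouts share a long common prompt: if per-prefix state is cached, only the divergent suffix of each rollout pays for recomputation. The sketch below shows the idea with a plain dictionary standing in for cached KV state; the class and counter are illustrative, not Forge's actual design.

```python
class PrefixCache:
    def __init__(self):
        self.cache = {}          # prefix tuple -> opaque cached state
        self.compute_steps = 0   # counts tokens actually (re)processed

    def _step(self, state, token):
        self.compute_steps += 1  # stands in for one forward pass
        return state + (token,)

    def encode(self, tokens):
        tokens = tuple(tokens)
        # Find the longest already-cached prefix of this sequence.
        for cut in range(len(tokens), 0, -1):
            if tokens[:cut] in self.cache:
                state, start = self.cache[tokens[:cut]], cut
                break
        else:
            state, start = (), 0
        # Process (and cache) only the uncached suffix.
        for i in range(start, len(tokens)):
            state = self._step(state, tokens[i])
            self.cache[tokens[:i + 1]] = state
        return state

pc = PrefixCache()
prompt = ["sys", "task", "tools"]
pc.encode(prompt + ["rollout-A"])
pc.encode(prompt + ["rollout-B"])  # reuses the 3-token shared prefix
print(pc.compute_steps)            # → 5, not 8
```

At rollout scale, this is why a large share of compute can go to generating fresh trajectory suffixes rather than reprocessing shared prompts.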
Z.ai GLM-5: New SOTA Open Weights LLM
glm-5 glm-4.5 kimi-k2.5 zhipu-ai openrouter modal deepinfra ollama qoder vercel deepseek-sparse-attention long-context model-scaling pretraining benchmarking office-productivity context-window model-deployment cost-efficiency
Zhipu AI launched GLM-5, an Opus-class model scaled up from 355B to 744B parameters, with DeepSeek Sparse Attention integrated for cost-efficient long-context serving. GLM-5 achieves SOTA on BrowseComp and leads on Vending Bench 2, focusing on office-productivity tasks and surpassing Kimi K2.5 on the GDPVal-AA benchmark. Despite broad availability on platforms like OpenRouter, Modal, DeepInfra, and Ollama Cloud, GLM-5 faces compute constraints affecting rollout and pricing. The model supports up to 200K context length and 128K max output tokens.