Claude Code Anniversary + Launches from: Qwen 3.5, Cursor Demos, Cognition Devin 2.2, Inception Mercury 2
qwen3.5-flash qwen3.5-35b-a3b qwen3.5-122b-a10b qwen3.5-27b qwen3.5-397b-a17b gpt-5.3-codex claude-code alibaba openai anthropic cursor huggingface model-architecture reinforcement-learning quantization context-windows agentic-ai api websockets software-ux enterprise-workflows model-deployment awnihannun andrew_n_carr justinlin610 unslothai terryyuezhuo haihaoshen 0xsero ali_tongyilab scaling01 gdb noahzweben _catwu
Alibaba launched the Qwen 3.5 Medium Model Series, featuring Qwen3.5-Flash, Qwen3.5-35B-A3B (MoE), and Qwen3.5-122B-A10B (MoE), emphasizing efficiency over raw scale with innovations like 1M-token context and INT4 quantization. OpenAI released GPT-5.3-Codex via the Responses API, with enhanced file-input support and faster WebSocket-based throughput. Anthropic introduced Claude Code Remote Control, enabling terminal sessions to be continued from mobile, and expanded enterprise workflow features. Cursor shifted its UX toward agent demo videos instead of diffs, highlighting new interaction modes.
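To give a rough sense of why INT4 quantization matters for models of this size, here is a back-of-the-envelope sketch of weight memory at different precisions. The figures are illustrative only (real deployments also need activations, KV cache, and per-group quantization scales), and the 122B parameter count is just the headline total of Qwen3.5-122B-A10B used as an example.

```python
# Approximate weight-storage footprint at a given precision.
# Illustrative arithmetic only; ignores quantization-scale overhead,
# activations, and KV cache.

def weight_gb(num_params: float, bits_per_weight: float) -> float:
    """Weight storage in GB (decimal) for num_params at bits_per_weight."""
    return num_params * bits_per_weight / 8 / 1e9

# Example: a 122B-parameter model (total, not active, parameters).
bf16 = weight_gb(122e9, 16)  # 16-bit weights
int4 = weight_gb(122e9, 4)   # 4-bit quantized weights
print(f"BF16: {bf16:.0f} GB, INT4: {int4:.0f} GB ({bf16 / int4:.0f}x smaller)")
```

The 4x reduction is what brings a model of this class within reach of a single high-memory workstation rather than a multi-GPU node.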
not much happened today
claude-4.6 claude-opus-4.6 claude-sonnet-4.6 qwen-3.5 qwen3.5-397b-a17b glm-5 gemini-3.1-pro minimax-m2.5 anthropic alibaba scaling01 arena artificial-analysis benchmarking token-efficiency ai-agent-autonomy reinforcement-learning asynchronous-learning model-performance open-weights reasoning software-engineering agentic-engineering eshear theo omarsar0 grad62304977 scaling01
Anthropic released Claude Opus 4.6 and Sonnet 4.6, showing a significant jump on the intelligence index but with increased token usage and cost. Anthropic also shared insights on AI agent autonomy, highlighting the prevalence of human-in-the-loop use and software-engineering tool calls. Alibaba launched Qwen 3.5, prompting discussion of reasoning efficiency and token bloat, and open-sourced Qwen3.5-397B-A17B FP8 weights. The GLM-5 technical report introduced asynchronous agent reinforcement learning and compute-efficient training techniques. Rumors about Gemini 3.1 Pro suggest longer reasoning capabilities, while MiniMax M2.5 appeared on community leaderboards. The community continues to debate benchmark reliability and nuances of model performance.
Qwen3.5-397B-A17B: the smallest open Opus-class model, and very efficient
qwen3.5-397b-a17b qwen3.5-plus qwen3-max qwen3-vl kimi alibaba openai deepseek z-ai minimax unsloth ollama vllm native-multimodality spatial-intelligence sparse-moe long-context model-quantization model-architecture model-deployment inference-optimization apache-2.0-license pete_steinberger justinlin610
Alibaba released Qwen3.5-397B-A17B, an open-weight model featuring native multimodality, spatial intelligence, and a hybrid linear-attention + sparse-MoE architecture supporting 201 languages and context windows up to 256K tokens. The model improves over previous versions such as Qwen3-Max and Qwen3-VL, with a sparsity ratio of about 4.3% (17B parameters active out of 397B total). Community discussions highlighted the Gated Delta Networks as enabling efficient inference despite the large model size (~800GB in BF16), with successful local runs on Apple Silicon via quantization. The hosted API version, Qwen3.5-Plus, supports 1M context and integrates search and code-interpreter features. This release follows refreshes from other Chinese labs such as Z.ai, MiniMax, and Kimi, and is expected to be the last major release before DeepSeek v4. The model is licensed under Apache-2.0. The news also notes Pete Steinberger joining OpenAI.