Person: "xlr8harder"
not much happened today
trinity-large-thinking, glm-5v-turbo, falcon-perception, qwen-3.5, claude-4.6-opus, claude-sonnet-4.5, arcee, z-ai, tii, anthropic, h-company, open-weights, agentic-performance, vision, multimodality, transformer-architecture, early-fusion, ocr, gui-navigation, context-compression, tooling, feature-flags, production-ablations, task-budget-management, streaming, modular-architecture, mark_mcquade, latkins, willccbb, xlr8harder, natolambert, craig_hewitt, zhihu_frontier
Arcee’s Trinity-Large-Thinking was released with open weights under Apache 2.0, a 400B-total / 13B-active-parameter model with strong agentic performance, ranking #2 on PinchBench. Z.ai’s GLM-5V-Turbo is a vision coding model with native multimodal fusion and a CogViT encoder, already integrated into multiple platforms. TII’s Falcon Perception offers an open-vocabulary referring-expression segmentation model built on an early-fusion transformer, alongside a competitive 0.3B OCR model. H Company’s Holo3 is a GUI-navigation model family based on Qwen3.5. A Claude Code leak revealed a minimalist agent core with a 4-layer context compression stack, a modular architecture spanning 40+ tools, and advanced features such as task budget management and streaming tool execution, underscoring the sophistication of Anthropic’s agent design and operations.
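The layered compression-plus-budget pattern described in the leak can be sketched roughly as follows. Every class, function, and parameter name here is an illustrative assumption, not the leaked implementation: each "layer" is just a function over the message history, applied in order, with a separate token budget that cuts the task off when exhausted.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str     # "user", "system", or "tool"
    content: str

def drop_old_tool_output(history, keep_last=3):
    """Example layer 1: elide stale tool outputs, keeping only the newest."""
    tool_msgs = [m for m in history if m.role == "tool"]
    stale = {id(m) for m in tool_msgs[:-keep_last]}
    return [Message(m.role, "[tool output elided]") if id(m) in stale else m
            for m in history]

def summarize_turns(history, max_turns=10):
    """Example layer 2: collapse older turns into one summary message."""
    if len(history) <= max_turns:
        return history
    old, recent = history[:-max_turns], history[-max_turns:]
    summary = Message("system", f"[summary of {len(old)} earlier messages]")
    return [summary] + recent

def compress(history, layers):
    """Run the history through each compression layer in order."""
    for layer in layers:
        history = layer(history)
    return history

class TaskBudget:
    """Minimal task budget manager: charge tokens per step, stop at zero."""
    def __init__(self, max_tokens):
        self.remaining = max_tokens

    def charge(self, tokens):
        self.remaining -= tokens
        return self.remaining > 0  # False signals the agent loop to halt
```

A real stack would presumably use model-generated summaries rather than placeholder strings, but the shape — composable layers over a shared message list, gated by a budget — is the interesting design point.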
not much happened today
gemini-3.1-pro, gpt-5.2, opus-4.6, sonnet-4.6, claude-opus-4.6, google-deepmind, anthropic, context-arena, artificial-analysis, epoch-ai, scaling01, retrieval, benchmarking, evaluation-methodology, token-limits, cost-efficiency, instruction-following, software-reasoning, model-reliability, dillonuzar, artificialanlys, yuchenj_uw, theo, minimax_ai, epochairesearch, paul_cal, metr_evals, idavidrein, xlr8harder, htihle, arena
Gemini 3.1 Pro demonstrates strong retrieval capabilities and cost efficiency compared to GPT-5.2 and Opus 4.6, though users report tooling and UI issues. The SWE-bench Verified evaluation methodology is under scrutiny for consistency, with updates bringing results closer to developer claims. Debate continues over what frontier benchmarks truly measure, especially with ARC-AGI puzzles. Claude Opus 4.6 shows a noisy but notable 14.5-hour time horizon on software tasks, with token limits causing practical failures. Sonnet 4.6 improves significantly on code and instruction-following benchmarks, but user backlash grows over product regressions.