Person: "lvwerra"
not much happened today
claude-code composer-2 cursor openai anthropic langchain cognition reinforcement-learning developer-tooling agent-systems agent-runtimes security credential-management multi-agent-systems model-training benchmarking software-engineering enterprise-ai kimmonismus mntruell theo ellev3n11 amanrsanger charliermarsh gdb yuchenj_uw neilhtennek simonw yuvalinthedeep lvwerra hrishioa
Cursor launched Composer 2, a frontier-class coding model with major cost reductions and strong benchmark scores: 61.3 on CursorBench and 73.7 on SWE-bench Multilingual. The model was built with a first continued-pretraining run feeding into reinforcement learning, trained across 3–4 clusters worldwide by a roughly 40-person team. OpenAI acquired Astral, the team behind the Python tools uv, ruff, and ty, strengthening its developer platform. Anthropic expanded Claude Code with messaging-app channels for persistent developer workflows. The focus in AI agents is shifting from single agents to managed fleets and runtimes: LangChain launched LangSmith Fleet for enterprise agent management, emphasizing agent identity, credential management, and auditability. Other developments include Cognition's teams of Devins, lvwerra's AgentUI, and discussions of agent runtimes with features like checkpointing and rollback. Security and permissions are emerging as critical constraints in agent system design.
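The checkpointing and rollback features discussed for agent runtimes can be sketched as follows. This is a minimal illustrative toy, not any named product's API; all class and method names here are hypothetical.

```python
import copy

class CheckpointingRuntime:
    """Toy agent runtime that snapshots state before each step,
    so a failed or undesired step can be rolled back.
    All names are illustrative, not a real runtime's API."""

    def __init__(self, initial_state):
        self.state = initial_state
        self._checkpoints = []  # stack of (step_id, state snapshot)

    def step(self, step_id, action):
        # Snapshot state before mutating it, so rollback can restore it.
        self._checkpoints.append((step_id, copy.deepcopy(self.state)))
        self.state = action(self.state)
        return self.state

    def rollback(self, step_id):
        # Restore the snapshot taken just before `step_id` executed,
        # discarding any later checkpoints along the way.
        while self._checkpoints:
            sid, snapshot = self._checkpoints.pop()
            if sid == step_id:
                self.state = snapshot
                return self.state
        raise KeyError(f"no checkpoint for step {step_id!r}")

runtime = CheckpointingRuntime({"files_written": []})
runtime.step("write-1", lambda s: {**s, "files_written": s["files_written"] + ["a.txt"]})
runtime.step("write-2", lambda s: {**s, "files_written": s["files_written"] + ["b.txt"]})
runtime.rollback("write-2")  # undo the second step
print(runtime.state["files_written"])  # → ['a.txt']
```

Snapshot-before-step is the simplest design; real runtimes would also need to persist checkpoints and handle side effects outside the agent's own state.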
The Ultra-Scale Playbook: Training LLMs on GPU Clusters
deepseek-native-sparse-attention r1-1776 paligemma-2-mix muse baichuan-m1-14b stripedhyena-2 huggingface deepseek perplexity-ai google-deepmind microsoft baichuan stripedhyena gpu-training scaling multimodality vision model-training foundation-models medical-llm genome-modeling robotic-manipulation interactive-content eliebakouch nouamanetazi lvwerra thom-wolf proftomyeh alex-wang aravsrinivas _akhaliq _philschmid mervenoyann reach_vb arankomatsuzaki maximelabonne
Hugging Face released "The Ultra-Scale Playbook: Training LLMs on GPU Clusters," an interactive blog post based on 4,000 scaling experiments on up to 512 GPUs, providing detailed insights into modern GPU training strategies. DeepSeek introduced the Native Sparse Attention (NSA) model, drawing significant community attention, while Perplexity AI launched R1-1776, a version of DeepSeek's R1 post-trained to remove censorship and bias. Google DeepMind unveiled PaliGemma 2 Mix, a multi-task vision-language model available in 3B, 10B, and 28B sizes. Microsoft introduced Muse, a generative AI model trained on the game Bleeding Edge, and presented Magma, a foundation model for multimodal AI agents that excels at UI navigation and robotic manipulation. Baichuan-M1-14B was announced as a state-of-the-art medical LLM trained on 20T tokens, and a fully open-source 40B genome modeling model built on the StripedHyena 2 architecture was also released. "Making your own gaming experience is coming sooner than you'd think," it was noted in relation to Muse.
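One of the training strategies covered in guides like the Ultra-Scale Playbook is gradient accumulation: splitting a large global batch into micro-batches, summing their gradients, and applying a single optimizer step, trading extra forward/backward passes for lower memory. A minimal sketch with a toy scalar model (not the playbook's code; the model and loss are illustrative):

```python
# Toy 1-D model: scalar weight w, squared-error loss 0.5 * (w*x - y)^2.

def grad(w, x, y):
    # d/dw of 0.5 * (w*x - y)^2  ->  (w*x - y) * x
    return (w * x - y) * x

def train_step(w, batch, lr=0.1, micro_batch_size=2):
    # Gradient accumulation: process the global batch in micro-batches,
    # accumulating each micro-batch's contribution to the batch-average
    # gradient before taking one optimizer step.
    acc = 0.0
    for i in range(0, len(batch), micro_batch_size):
        micro = batch[i:i + micro_batch_size]
        acc += sum(grad(w, x, y) for x, y in micro) / len(batch)
    return w - lr * acc  # single update for the whole global batch

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # data from y = 2x
w = 0.0
for _ in range(100):
    w = train_step(w, batch)
print(round(w, 3))  # converges toward the true slope 2.0
```

The accumulated update is mathematically identical to one full-batch step; in real frameworks the same idea applies per device, with gradients synchronized across the cluster before the optimizer step.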