All tags
Model: "gpt-5.4"
not much happened today
gpt-5.4-mini gpt-5.4-nano gpt-5.4 codex openai langchain stripe ramp coinbase nous-research hermes-agent coding multimodality subagents context-window model-performance pricing behavior-tuning secure-execution plugin-architecture attention-mechanisms agent-infrastructure hwchase17 michpokrass
OpenAI released GPT-5.4 mini and GPT-5.4 nano, their most capable small models optimized for coding, multimodal understanding, and subagents, featuring a 400k context window and over 2x speed compared to GPT-5 mini. The mini model approaches larger GPT-5.4 performance while using only 30% of Codex quota, becoming the default for many coding workflows. Pricing concerns and truthfulness tradeoffs were noted, with mixed third-party evaluations on reasoning and resistance to false premises. OpenAI also addressed behavior tuning issues in a recent update. Meanwhile, agent infrastructure is evolving with secure code execution and orchestration tools like LangChain's LangSmith Sandboxes and Open SWE, inspired by internal systems at Stripe, Ramp, and Coinbase. Subagents and secure execution are now key product features, with releases like Hermes Agent v0.3.0 showcasing plugin architectures, live Chrome control, and voice mode. Research on attention mechanisms, including Attention Residuals and vertical attention, is gaining traction.
not much happened today
kimi-linear-48b codex gpt-5.4 claude-code moonshot openai assemblyai langchain attention-mechanisms model-architecture inference-speed agent-feedback agent-skills multi-agent-systems knowledge-transfer cli-tools coding-agents model-deployment kimi_moonshot elonmusk yuchenj_uw nathancgy4 eliebakouch tokenbender behrouz_ali cloneofsimo fidjissimo sama gdb andrewyng itsafiz simplifyinai
Moonshot's Attention Residuals paper introduced an input-dependent attention mechanism over prior layers with a 1.25x compute advantage and less than 2% inference latency overhead, validated on Kimi Linear (48B total / 3B active parameters). The paper sparked debate on novelty versus prior art like DeepCrossAttention and Google's earlier work, highlighting tensions in idea novelty, citation quality, and frontier-scale validation. OpenAI's Codex showed strong momentum with over 2M weekly active users, nearly 4x growth YTD, and GPT-5.4 hitting 5T tokens/day and a $1B annualized run-rate. Codex added subagents supporting multi-agent coding workflows. Infrastructure for coding agents matured with tools like Context Hub / chub supporting agent feedback loops, AssemblyAI's skill for Claude Code and Codex, and automated skill extraction from GitHub repos yielding 40% knowledge-transfer gains. LangChain launched LangGraph CLI and open-sourced Deep Agents, recreating top coding agent workflows with planning, filesystem ops, shell access, and sub-agents.
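The summary above does not give the paper's exact formulation, but the core idea it describes (attention over prior layers, mixed by input-dependent weights) can be illustrated with a hypothetical NumPy sketch. The gating scheme here (a softmax over per-prior-layer scalars computed from the mean-pooled hidden state) is an assumption for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product self-attention (single head).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def attention_with_residuals(x, prior_attn_outputs, w_gate):
    """Hypothetical 'attention residual': this layer's attention output plus
    a gated sum of attention outputs from prior layers, where the gates are
    input-dependent (computed from the current hidden state x)."""
    cur = attention(x, x, x)
    if not prior_attn_outputs:
        return cur
    # One gate per prior layer, derived from mean-pooled x (illustrative choice).
    gates = softmax(np.mean(x, axis=0) @ w_gate)  # shape: (num_prior_layers,)
    residual = sum(g * o for g, o in zip(gates, prior_attn_outputs))
    return cur + residual

seq, d, n_prior = 4, 8, 2
x = rng.normal(size=(seq, d))
priors = [rng.normal(size=(seq, d)) for _ in range(n_prior)]
w_gate = rng.normal(size=(d, n_prior))

out = attention_with_residuals(x, priors, w_gate)
print(out.shape)  # (4, 8)
```

Because the residual path only re-mixes already-computed attention outputs, the extra work per layer is a small gating projection plus a weighted sum, which is consistent with the low latency overhead the summary reports.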
not much happened today
gpt-5.4 openai anthropic uber nous-research cursor_ai redisinc artificialanlys langchain-js agent-infrastructure mcp-protocol harnesses coding-agents evaluation-methodologies agent-ui-ux runtime-environments multi-axis-evaluation automation workflow-optimization open-agent-platforms provider-integration filesystem-checkpoints mattturck hwchase17 omarsar0 gergelyorosz htihle theprimeagen sydneyrunkle corbtt
Harnesses, agent infrastructure, and the MCP protocol are central themes, with emphasis on how harnesses, sandboxes, filesystem access, skills, memory, and observability shape agent UI/UX and runtime environments. Despite jokes about MCP's demise, it remains vital in production, notably used internally by Uber and supported by Anthropic. The coding-agent stack is evolving with CursorBench combining offline and online metrics to evaluate models on intelligence and efficiency, where GPT-5.4 leads in correctness and token efficiency. Agent-assisted development is splitting between automation-heavy workflows and "stay-in-the-loop" tooling, with OpenAI advancing Codex Automations featuring worktree vs. branch choices and UI customization. The open agent platform Hermes Agent v0.2.0 introduces full MCP client support, ACP server for editors, and expanded provider integrations including OpenAI OAuth.
not much happened today
gpt-5.4 gpt-5.2 gemini-3.1-pro openai artificial-analysis gemini claude mit figma github benchmarking physics-reasoning agentic-coding hallucination-detection context-windows cost-efficiency agent-prompting scheduled-tasks loop-patterns ai-evaluation design-code-integration agent-orchestration open-source
OpenAI rolled out GPT-5.4, tying Gemini 3.1 Pro Preview for #1 on the Artificial Analysis Intelligence Index with a score of 57 (up from 51 for GPT-5.2 xhigh). GPT-5.4 features a larger ~1.05M token context window and higher per-token prices ($2.50/$15 vs $1.75/$14 per 1M input/output tokens for GPT-5.2), with strengths in physics reasoning (CritPt) and agentic coding (TerminalBench Hard) but a higher hallucination rate and ~28% higher benchmark run cost. The GPT-5.4 Pro variant shows a +10 point jump on CritPt, reaching 30%, but at an extreme output token cost of $180 / 1M tokens. Community benchmarks show GPT-5.4 excels in agentic/coding tasks but mixed feedback on reasoning efficiency and literalness compared to Claude. OpenAI updated agent prompting guidance for GPT-5.4 API users, emphasizing tool use, structured outputs, and verification loops. Claude Code added local scheduled tasks and loop patterns for agents. The MCP framework is highlighted as a connective tissue for AI evaluation and design-code round-trips, with Truesight MCP enabling AI evaluation like unit testing and Figma MCP server supporting bidirectional design-code integration. Open-source T3 Code launched as an agent orchestration coding app built on Codex CLI.
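A quick sketch of how the quoted per-million-token prices translate into per-request cost. The prices come from the summary above; the token counts are illustrative, not from the source.

```python
# Per-1M-token API prices quoted in the summary (USD).
PRICES = {
    "gpt-5.4": {"input": 2.50, "output": 15.00},
    "gpt-5.2": {"input": 1.75, "output": 14.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request under the quoted per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative request: 200k input tokens, 10k output tokens.
old = request_cost("gpt-5.2", 200_000, 10_000)  # 0.35 + 0.14 = $0.49
new = request_cost("gpt-5.4", 200_000, 10_000)  # 0.50 + 0.15 = $0.65
print(f"GPT-5.2: ${old:.2f}  GPT-5.4: ${new:.2f}  (+{new / old - 1:.0%})")
```

For this input-heavy mix the price gap works out to roughly a third more per request, in the same ballpark as the ~28% higher benchmark run cost the summary reports.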
GPT 5.4: SOTA Knowledge Work -and- Coding -and- CUA Model, OpenAI is so very back
gpt-5.4 gpt-5.4-pro openai cursor_ai perplexity_ai arena native-computer-use long-context efficiency steering benchmarking gpu-kernels attention-mechanisms algorithmic-optimization pipeline-optimization sama reach_vb scaling01 danshipper yuchenj_uw
OpenAI launched GPT-5.4 and GPT-5.4 Pro with unified mainline and Codex models, featuring native computer use, up to ~1M token context, and efficiency improvements including a new Codex /fast mode. Benchmarks showed strong results like OSWorld-Verified 75.0% surpassing the human baseline and GDPval 83% against industry pros. User feedback highlighted coding utility but raised concerns about pricing and overthinking. Integration with devtools like Cursor, Perplexity, and Arena was announced. In systems research, FlashAttention-4 (FA4) was introduced with near-matmul-speed attention on Blackwell GPUs, featuring innovations like polynomial exp emulation and online softmax. "Steering mid-response" and "fewer tokens, faster speed" were emphasized as UX and efficiency improvements.
not much happened today
gemini-3.1-flash-lite gpt-5.4 claude-opus-4.6 qwen-3.5 qwen google-deepmind openai anthropic alibaba nvidia meta-ai-fair hugging-face model-positioning latency cost-efficiency context-window extreme-reasoning agentic-ai model-updates general-agent-behavior visual-mathematics leadership-exits organizational-restructuring compute-access research-workflows open-weight-models ecosystem-dependence demishassabis natolambert poezhao0605 simonw
Gemini 3.1 Flash-Lite is highlighted by Demis Hassabis for its speed and cost-efficiency, focusing on latency and cost per capability rather than raw performance. NotebookLM Studio introduces a new feature for generating immersive cinematic video overviews. Rumors about GPT-5.4 suggest a ~1 million token context window and an "extreme reasoning mode" for long-horizon tasks, with speculation about monthly model updates from OpenAI. Anthropic's Claude Opus 4.6 is noted for strong general agent behavior but weaker visual mathematics performance. Alibaba's Qwen team faces leadership exits and restructuring, with concerns about compute access and organizational changes. Qwen models dominate research workflows, appearing in 41% of Hugging Face papers in 2025-2026, raising ecosystem dependence risks. The open-weight model landscape may consolidate around non-profits, NVIDIA, and Meta due to business incentives.
not much happened today
gemini-3.1-flash-lite gemini-3 gpt-5.3 gpt-5.4 qwen google-deepmind google openai alibaba multimodality latency throughput context-window model-pricing model-benchmarking model-performance conversational-ai hallucination-reduction api model-rollout leadership-exit jeffdean noamshazeer sundarpichai aidan_mclau justinlin610
Google DeepMind launched Gemini 3.1 Flash-Lite, emphasizing dynamic thinking levels for adjustable compute, with notable metrics like $0.25/M input, $1.50/M output, 1432 Elo on LMArena, and 2.5× faster time-to-first-token than Gemini 2.5 Flash. It supports a 1M context window and high throughput for multimodal inputs including text, images, video, audio, and PDFs. OpenAI rolled out GPT-5.3 Instant to all ChatGPT users, improving conversational naturalness and reducing hallucinations by 26.8% with search. The upcoming GPT-5.4 was teased amid speculation. Alibaba's Qwen faces leadership exits, raising concerns about its future and open-source status. The news highlights advancements in model efficiency, pricing, and multimodality, alongside organizational changes impacting AI development.