All tags
Person: "aidan_gomez"
not much happened today
gemini-3.1-flash voxtral-tts cohere-transcribe gpt-5.4-mini gpt-5.4-nano glm-5-turbo reka-edge reka-flash-3 google-deepmind mistral-ai cohere openai zai reka-ai voice vision function-calling context-windows multimodality text-to-speech low-latency human-preference automatic-speech-recognition model-benchmarking cost-efficiency hallucination-detection multi-agent-systems open-source git-worktrees logan_kilpatrick sundar_pichai guillaume_lample aidan_gomez jay_alammar giffmana andrew_curran
Google launched Gemini 3.1 Flash Live, a realtime voice-and-vision agent model with 2x longer conversation memory, support for 70 languages, and a 128k context window. Mistral AI released Voxtral TTS, a low-latency, open-weight text-to-speech model supporting 9 languages and competitive with ElevenLabs. Cohere introduced Cohere Transcribe, an audio model with 14-language support that tops the English ASR leaderboard at 5.42 WER. OpenAI released smaller multimodal variants GPT-5.4 mini and GPT-5.4 nano with 400k context, noted for cost-competitiveness but also for high verbosity and hallucination rates. Other releases include GLM-5-Turbo from Zai, Reka Edge and Reka Flash 3 on OpenRouter, and Cline Kanban, new multi-agent UX tooling for orchestrating CLI coding agents.
Mergestral, Meta MTIAv2, Cohere Rerank 3, Google Infini-Attention
mistral-8x22b command-r-plus rerank-3 infini-attention llama-3 sd-1.5 cosxl meta-ai-fair mistral-ai cohere google stability-ai hugging-face ollama model-merging training-accelerators retrieval-augmented-generation linear-attention long-context foundation-models image-generation rag-pipelines model-benchmarking context-length model-performance aidan_gomez ylecun swyx
Meta announced its new MTIAv2 chips, designed for training and inference acceleration with an improved architecture and integration with PyTorch 2.0. Mistral released the Mixtral 8x22B model, whose experts were merged back into a single dense model to effectively create a 22B Mistral model ("Mergestral"). Cohere launched Rerank 3, a foundation model enhancing enterprise search and retrieval-augmented generation (RAG) systems, with support for 100+ languages. Google published a paper on Infini-attention, an ultra-scalable linear attention mechanism demonstrated on 1B and 8B models at 1 million tokens of sequence length. Additionally, Meta's Llama 3 is expected to start rolling out soon. Other notable updates include Command R+, an open model surpassing GPT-4 in chatbot performance with a 128k context length, and advancements in Stable Diffusion models and RAG pipelines.
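The "merged back into a dense model" idea above can be sketched in miniature: one simple merging strategy is to average the per-expert feed-forward weight matrices of an MoE layer into a single dense matrix. This is a toy illustration under stated assumptions (shapes, expert count, and the averaging rule are hypothetical here, not Mistral's actual procedure):

```python
import numpy as np

def merge_experts_to_dense(expert_weights):
    """Average a list of same-shaped per-expert weight matrices
    into one dense weight matrix (uniform-mean merging)."""
    stacked = np.stack(expert_weights, axis=0)  # (num_experts, d_out, d_in)
    return stacked.mean(axis=0)                 # (d_out, d_in)

# Toy MoE layer: 8 experts with 4x4 FFN weights (sizes are illustrative).
rng = np.random.default_rng(0)
experts = [rng.standard_normal((4, 4)) for _ in range(8)]

dense = merge_experts_to_dense(experts)
print(dense.shape)  # (4, 4) — one dense matrix replaces the 8 experts
```

Real merges of this kind typically weight experts by router statistics rather than averaging uniformly, but the shape-level intuition is the same: the parameter count of the merged layer drops to that of a single expert.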