News

Gen AI Live

A lot happens in Gen AI. Gen AI Live is the definitive resource for executives who want only the signal: curated, thoughtful, high-impact Gen AI news.
Models
January 31, 2026

DeepSeek gets approval to buy Nvidia's H200 AI chips

China has conditionally approved AI startup DeepSeek to buy Nvidia’s high-performance H200 chips, pending regulatory terms. Other Chinese tech firms received similar clearances. Nvidia awaits formal notice.

China has granted conditional approval for DeepSeek, a leading domestic AI company, to buy Nvidia’s advanced H200 artificial intelligence chips, sources said. Final regulatory conditions are still being worked out by China’s National Development and Reform Commission.

The decision aligns DeepSeek with other major Chinese tech groups like ByteDance, Alibaba, and Tencent, which received approvals to acquire large quantities of the same processors. The approvals come amid tight U.S. and Chinese rules on advanced chip exports and imports.

Nvidia’s CEO said the company has not yet received official notification. If finalized, the deal could boost China’s AI data center and infrastructure development.

#
DeepSeek
Models
January 30, 2026

How AI assistance impacts the formation of coding skills

Anthropic research found AI coding help speeds some tasks up to 80 percent, but heavy reliance can weaken learning. Developers who ask questions and seek explanations retain more understanding.

Anthropic studied how AI assistance affects coding skill development. Using a controlled trial with software developers, the research found that AI can accelerate task completion but may reduce mastery of new coding concepts.

Participants who used AI scored about 17 percent lower on a quiz measuring comprehension after completing tasks with AI support, compared to those who coded without help. The effect was strongest in areas like debugging and code reading.

The research also showed that how developers interact with AI matters: those who asked for explanations alongside code generation retained more knowledge. The study highlights a trade-off between speed and deep learning.

#
Anthropic
Expert Views
January 29, 2026

The 2026 Guide to Amazon Bedrock AgentCore

GoML explains Amazon Bedrock AgentCore as a platform for building and running AI agents at scale with memory, runtime, identity, and observability, simplifying production deployments and reducing infrastructure friction.

GoML outlines Amazon Bedrock AgentCore as a managed platform that helps organizations build, deploy, and operate enterprise-grade AI agents. It solves common deployment hurdles such as memory management, scaling, security, and observability by providing services like serverless runtime, persistent context memory, identity controls, and deep tracing.

AgentCore supports multiple frameworks and models, making it flexible for diverse agent workloads. The platform includes policy enforcement and built-in evaluation tools to maintain quality and safety in production.

This guide highlights how AgentCore bridges the gap between early prototypes and scaled, reliable AI agents in real-world systems.

#
Bedrock
Expert Views
January 28, 2026

Top AI trends 2026 suggested by Stanford specialists

GoML covers key AI trends for 2026 from Stanford specialists. Focus shifts from hype to real value, rigorous evaluation, medical AI progress, job impact tracking and transparent decision systems. Investment efficiency matters.

GoML presents Stanford-backed AI trends for 2026. The narrative moves away from speculative promise to measured results and practical use.

Businesses now demand clear productivity gains, cost insight and reliable systems. Medical AI gains traction with models trained on large clinical data supporting rare disease detection and clinician workflows. Real-time tracking of job effects replaces broad forecasts, letting policymakers and companies adjust training and workforce strategies.

Explainability becomes essential, especially in high-stakes decisions like medical diagnosis and lending. Rising data center costs and sovereignty concerns drive efficient infrastructure choices. Overall, 2026 prioritizes systems that deliver value and are governed with discipline.

#
OpenAI
Models
January 28, 2026

Anthropic selected to build government AI assistant pilot

Anthropic was chosen by the UK's Department for Science, Innovation and Technology to build and pilot an AI assistant for GOV.UK, guiding citizens through complex public service processes with tailored support.

Anthropic has been selected by the UK Department for Science, Innovation and Technology to develop and pilot an AI assistant for the GOV.UK platform.

The pilot uses an agentic system powered by Claude to help citizens navigate public services with step-by-step support rather than simple question-and-answer responses. Initial deployment focuses on employment services, guiding jobseekers through job search, training resources, and government support programs.

The project follows a structured scan, pilot, scale framework to test safety and effectiveness before wider rollout, with emphasis on data control, transparency, and compliance with UK laws. Engineers from Anthropic will work with civil service teams to build internal expertise.

#
Anthropic
Models
January 28, 2026

Introducing Prism

OpenAI introduced Prism, a free AI-powered scientific workspace built on GPT-5.2. It streamlines drafting, citations, collaboration, and equation work for researchers in a unified environment.

OpenAI launched Prism, a free AI-native workspace designed for scientific research and writing. Built on GPT-5.2, Prism integrates drafting, literature search, citation management, and real-time collaboration into one platform. It supports complex tasks such as converting diagrams to LaTeX and reasoning over equations within full document context.

The tool aims to reduce workflow fragmentation and help researchers focus on substantive scientific work. Prism is available today to users with a ChatGPT personal account and will expand to business and enterprise plans.

By bringing advanced reasoning and collaboration together, OpenAI hopes to streamline research workflows and enhance productivity.

#
OpenAI
Spotlight
January 27, 2026

NewVue AI radiology: transforming reporting with 2× visibility

GoML and NewVue used AI to modernize radiology reporting. Real-time dictation, structured reports, and live insights doubled visibility for clinical, admin, and engineering teams and cut manual work.

GoML built an AI-driven radiology reporting platform for NewVue that replaces manual, slow workflows with real-time insights. The system captures live speech, transcribes with confidence scores, and generates structured reports. Radiologists see live dictation progress, turnaround metrics, and correction trends.

Admin and engineering teams get guaranteed visibility into performance and error patterns. Role-based access keeps sensitive data secure and compliant. The solution runs on AWS Bedrock and integrates with hospital systems.

Early results show about 50 percent faster visibility into reporting status and twice the insight availability across teams, reducing manual monitoring by 60-70 percent.

#
GoML
Models
January 27, 2026

DeepSeek releases DeepSeek-OCR 2 with advanced visual reasoning

DeepSeek unveiled DeepSeek-OCR 2, an open-source vision model using DeepEncoder V2 to enable causal visual reasoning, significantly improving image understanding, structured text extraction, and multimodal AI performance.

DeepSeek has launched DeepSeek-OCR 2, its latest open-source image understanding model, introducing the DeepEncoder V2 architecture and a new “visual causal flow” approach.

Unlike traditional OCR systems, the model reasons about visual structure and relationships, allowing it to interpret complex layouts, charts, and documents more accurately. By open-sourcing the model and research paper, DeepSeek reinforces its strategy of competing globally through cost-efficient, high-performance AI.

The release strengthens China’s position in multimodal AI and signals growing maturity beyond text-only large language models.

#
DeepSeek
Ecosystem
January 26, 2026

Amazon Bedrock adds one-hour prompt caching to boost latency and cost efficiency

Amazon Bedrock now supports one-hour prompt caching, allowing developers to reuse context efficiently, reduce inference latency, and lower costs for repetitive or long-running generative AI workloads.

AWS has enhanced Amazon Bedrock by extending prompt caching duration to one hour, a significant upgrade for developers building production-scale generative AI applications.

Prompt caching enables reuse of previously processed context, reducing repeated computation and improving response latency while lowering inference costs. This is particularly valuable for agentic workflows, RAG systems, and conversational applications with stable system prompts.
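
Below is a minimal sketch of how cached context might be marked with the Bedrock Converse API in boto3, assuming a model that supports cachePoint blocks; the model ID and prompt are placeholders, not a definitive implementation.

```python
# Minimal sketch: reusing a long, stable system prompt via a cachePoint block
# in the Bedrock Converse API (boto3). Model ID and prompt text are placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

system_prompt = "You are a support agent for ACME. Follow the policy document below..."  # long, stable context

response = client.converse(
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # placeholder; use a caching-capable model
    system=[
        {"text": system_prompt},
        {"cachePoint": {"type": "default"}},  # content before this marker is cached and reused
    ],
    messages=[
        {"role": "user", "content": [{"text": "Summarize the refund policy."}]},
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```

Repeated calls that share the cached prefix would then skip reprocessing it, which is where the latency and cost savings come from.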

The update signals AWS’s focus on inference optimization rather than just model access, positioning Bedrock as a more cost-efficient, enterprise-ready platform for scalable Gen AI deployments.

#
Bedrock
Models
January 23, 2026

Unrolling the Codex Agent loop

OpenAI explains the core “agent loop” in the Codex CLI: how user input, model inference, and tool calls interact to generate effective code actions, with prompt construction, context management, and iteration highlighted.

OpenAI’s “Unrolling the Codex Agent Loop” post details the internal mechanics of the Codex CLI, focusing on the agent loop, which orchestrates the flow between user input, model inference, and tool execution to perform software tasks.

It explains how prompts are built, how the model’s responses can trigger tool calls, and how these interactions repeat until a final result is produced. The post also discusses challenges like context window growth and performance optimization through prompt caching and automatic compaction.
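
As a rough illustration of the pattern the post describes (not OpenAI's actual Codex implementation), a generic agent loop alternates model inference with tool execution until no further tool calls are requested; `call_model` and the tool registry below are hypothetical stand-ins.

```python
# Schematic agent loop (illustrative only, not the Codex CLI source).
# `call_model` and `tools` are hypothetical stand-ins.

def run_agent(user_input, tools, call_model, max_turns=20):
    """Alternate model inference and tool execution until the model stops calling tools."""
    history = [{"role": "user", "content": user_input}]

    for _ in range(max_turns):
        reply = call_model(history)            # one inference step over the accumulated context
        history.append(reply)

        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:                     # no tool use requested: reply is the final answer
            return reply["content"]

        for call in tool_calls:                # execute each requested tool, feed results back
            result = tools[call["name"]](**call["arguments"])
            history.append({"role": "tool", "name": call["name"], "content": str(result)})

    raise RuntimeError("Agent did not finish within the turn limit")
```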

This deep technical overview is the first in a series aimed at revealing design insights behind Codex’s efficient and safe code generation.

#
OpenAI
Models
January 23, 2026

Claude's new constitution

Anthropic has released a new constitution for its AI model Claude, detailing the values and ethical principles that guide its behavior and training. The document aims to shape Claude’s safety, ethics, and helpfulness.

Anthropic has published a new constitution for its AI model, Claude, outlining in detail the ethical framework and core values that should guide the model’s behavior and decision-making.

The constitution serves as both a training tool and a transparency measure, explaining the principles Claude should uphold, such as safety, ethics, compliance with internal guidelines, and genuine helpfulness to users.

Anthropic says the document shapes Claude’s reasoning and training, helping it apply broad values rather than merely following rules. The constitution is released openly under a Creative Commons license so others can study or adopt similar frameworks.

#
Anthropic
Models
January 21, 2026

OpenAI unveils Stargate Community plan to keep data-center energy costs in check

OpenAI launched the Stargate Community plan to ensure its AI data centers don’t raise local electricity costs. Each site will tailor energy solutions using community input and may fund new power infrastructure.

OpenAI has introduced the Stargate Community plan as part of its broader $500 billion Stargate initiative to build large-scale AI data centers across the U.S.

The plan aims to ensure that expanding AI infrastructure doesn’t increase local electricity costs by having OpenAI fund and develop energy resources and grid upgrades as needed. Each Stargate campus will work closely with local communities and utilities to tailor solutions that support stable power without burdening residents.

The initiative reflects growing industry efforts to address energy concerns tied to rapid AI infrastructure growth.

#
OpenAI
Models
January 20, 2026

Introducing OpenAI’s Education for Countries

OpenAI launches Education for Countries to partner with governments and institutions, embedding AI into education to personalize learning, build AI skills, support research, and prepare students and teachers for future workforce demands.

OpenAI’s Education for Countries initiative aims to support national education systems in integrating AI tools like ChatGPT Edu, GPT-5.2, and study mode to enhance learning and teaching.

Working with governments, ministries, and universities worldwide, the program focuses on personalized learning, reducing administrative burden, promoting research on AI’s educational impact, and providing certification and training aligned with workforce needs.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago, and the UAE, with nationwide deployments and research collaborations already underway. OpenAI emphasizes responsible, equitable AI adoption to strengthen education and future workforce readiness.

#
OpenAI
Models
January 20, 2026

Cisco and OpenAI redefine enterprise engineering with AI agents

Cisco and OpenAI partnered to integrate Codex into enterprise engineering workflows, enabling AI agents to operate at scale across complex codebases, reducing build times, automating defect fixes, and shaping Codex for large organizations.

OpenAI and Cisco are collaborating to transform enterprise engineering by embedding Codex AI agents into real-world development workflows.

Rather than using Codex as a simple tool, Cisco integrates it into production environments with large, interconnected codebases, enhancing complex tasks like build optimization, defect remediation, and framework migrations.

This collaboration has yielded measurable benefits such as reduced build times and faster defect resolution and influenced Codex’s enterprise readiness in areas like security, compliance, and long-running task management. Together, Cisco and OpenAI aim to expand how AI can function as a reliable engineering teammate in demanding global software environments.

#
OpenAI
Models
January 20, 2026

Horizon 1000 advancing AI for primary healthcare

OpenAI and the Gates Foundation launched Horizon 1000, committing $50 million in funding, technology, and support to strengthen primary healthcare in African communities, reaching 1,000 clinics by 2028 to improve care quality and access.

OpenAI, in partnership with the Gates Foundation, announced Horizon 1000, a major initiative to advance AI tools for primary healthcare across Africa.

With a $50 million commitment, the project aims to strengthen health systems by 2028, reaching 1,000 primary care clinics and surrounding communities. The program will help frontline health workers use AI to navigate complex guidelines, reduce administrative burden, and improve care consistency where staffing and resources are limited.

Leaders in Rwanda and other nations will receive funding, technology, and technical support to deploy AI safely and meaningfully, closing the gap between AI capabilities and real-world healthcare needs.

#
OpenAI
Models
January 20, 2026

Anthropic and Teach For All launch global AI training initiative for educators

Anthropic partnered with Teach For All to launch a global AI training initiative, providing AI tools, Claude access, and training to over 100,000 educators across 63 countries to boost AI fluency in classrooms.

Anthropic and Teach For All have announced a major global initiative to bring AI tools and training to educators in 63 countries.

Through the AI Literacy & Creator Collective, more than 100,000 teachers and Teach For All alumni who collectively serve over 1.5 million students will gain access to Claude, AI fluency programs, and practical classroom applications.

This partnership positions teachers as co-creators shaping how AI is used in education, with ongoing peer learning and innovation spaces where educators can pilot AI-enabled tools and provide feedback to inform future product development.

#
Anthropic
Ecosystem
January 20, 2026

Introducing multimodal retrieval for Amazon Bedrock Knowledge Bases

Amazon Bedrock Knowledge Bases now supports multimodal retrieval, enabling RAG applications to search and retrieve insights across text, images, audio, and video using unified, fully managed AI workflows.

Amazon has announced the general availability of multimodal retrieval for Amazon Bedrock Knowledge Bases, expanding native support beyond text and images to include audio and video content.

This capability allows organizations to build Retrieval Augmented Generation (RAG) applications that seamlessly search across multiple data formats using a single, managed workflow. Powered by Amazon Nova Multimodal Embeddings and Bedrock Data Automation, the solution preserves visual and audio context or delivers precise transcriptions based on use-case needs.
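
For orientation, a query against a multimodal knowledge base looks the same as a text-only one from the caller's side; the sketch below uses the bedrock-agent-runtime retrieve call with a placeholder knowledge base ID, assuming multimodal ingestion is configured on the knowledge base itself.

```python
# Minimal sketch: querying a Bedrock Knowledge Base (boto3). The KB ID is a placeholder;
# multimodal ingestion (audio/video via Bedrock Data Automation, Nova embeddings) is
# configured on the knowledge base, not in this call.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder
    retrievalQuery={"text": "What did the keynote say about Q3 revenue?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for result in response["retrievalResults"]:
    # Results may point at text chunks or at segments of images, audio, and video.
    print(result["location"], result.get("score"))
```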

By eliminating complex custom pipelines, Bedrock Knowledge Bases makes it easier for enterprises to unlock insights from diverse data sources such as videos, recordings, images, and documents at scale.

#
AWS
Models
January 19, 2026

Approach to advertising and expanding access to ChatGPT

OpenAI will test ads in ChatGPT for free and low-cost “Go” users in the US. Ads will be clearly labeled at the bottom of answers and won’t change responses or sell user data.

OpenAI announced it will begin testing advertisements inside ChatGPT for adult users in the United States on the free tier and its newly expanded $8 “Go” subscription plan.

The ads will appear clearly at the bottom of chatbot responses and are designed to be separate from the AI’s answers, so they do not influence what ChatGPT says. OpenAI emphasized it will never sell users’ data to advertisers, and that higher-tier paid subscriptions (Plus, Pro, Business, Enterprise) remain ad-free.

The trial aims to support broader access to the AI service while diversifying revenue beyond subscriptions.

#
OpenAI
Models
January 16, 2026

How scientists are using Claude to accelerate research and discovery

Anthropic explains how scientists are using its AI, Claude, to accelerate research and discovery by handling complex tasks like data analysis and experiment design, speeding work that once took months into hours.

Anthropic highlights how researchers are using its advanced AI, Claude, to accelerate scientific discovery.

By integrating Claude with scientific tools and workflows, researchers can automate complex tasks such as data analysis, experiment planning, and interpreting results, compressing processes that typically take months into just hours.

This AI collaboration helps eliminate bottlenecks and enables deeper insights by navigating diverse databases and tools more efficiently. Case studies show Claude supporting scientists across all stages of research, reshaping how work is done and accelerating progress in areas like genomics and biomedical discovery.

#
Anthropic
Models
January 15, 2026

Google unveils TranslateGemma, a new open translation AI built on Gemma 3

Google launched TranslateGemma, a suite of open translation models built on Gemma 3, offering high-quality translation across 55 languages with efficient performance for mobile, laptops, and cloud environments.

Google introduced TranslateGemma, a new family of open translation models based on the Gemma 3 architecture, designed to support translation across 55 languages with high accuracy and efficiency.

Available in 4B, 12B, and 27B parameter sizes, these models deliver strong performance for various devices from mobile and edge hardware to cloud GPUs without sacrificing translation quality.

The 12B model even outperforms larger baselines on benchmark tests, while all variants retain multimodal translation abilities (including text in images). TranslateGemma aims to empower developers and researchers with accessible, state-of-the-art translation tools.

#
Google
Models
January 14, 2026

Introducing Labs

Anthropic announces Anthropic Labs, a dedicated team for experimenting with new AI capabilities and turning them into products. It expands innovation around Claude features and incubates future AI tools.

Anthropic has launched Anthropic Labs, a new initiative focused on experimenting with cutting-edge AI ideas and rapidly building them into scalable products.

This innovation team supports early-stage exploration around advanced Claude capabilities and tests unpolished versions with early users to identify what works best. Anthropic Labs has already contributed to successful offerings like Claude Code, MCP, Skills, and Cowork, and now aims to expand this experimental approach.

Instagram co-founder Mike Krieger joins the Labs team, while product leadership shifts to support scaling core Claude experiences. The goal is to foster frontier AI development responsibly and bring new tools to market from these experiments.

#
Anthropic
Models
January 14, 2026

New technical directions for DeepSeek V4

DeepSeek’s latest research reveals new technical directions for DeepSeek V4, emphasizing sparse architectures and efficiency-focused design to overcome hardware constraints and reduce dependence on high-end GPUs.

DeepSeek has published new research outlining the technical direction of its upcoming DeepSeek V4 model, focusing on architectural efficiency rather than brute-force scaling.

The company is exploring sparse model designs, modular components, and memory-efficient computation to overcome hardware bottlenecks such as GPU shortages and memory limits. These innovations aim to deliver frontier-level performance while reducing compute costs and reliance on top-tier hardware.

By rethinking model architecture instead of simply increasing parameter counts, DeepSeek positions V4 as a more scalable and sustainable alternative in the global AI race, particularly under tightening chip export restrictions.

#
DeepSeek
Ecosystem
January 13, 2026

AWS strengthens agentic AI ecosystem with LangGraph and DynamoDB integrations

AWS launched new tools enabling LangGraph agents to run on Bedrock AgentCore, with DynamoDB support for state management, accelerating development of production-grade AI agents.

AWS is expanding its agentic AI ecosystem by enabling LangGraph-based agents to run directly on the Amazon Bedrock AgentCore runtime, alongside new DynamoDB-backed state persistence.

This combination allows developers to build robust, long-running AI agents with memory, observability, and fault tolerance built in.
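
To make the shape concrete, here is a minimal LangGraph graph compiled with a checkpointer; an in-memory saver stands in for the DynamoDB-backed persistence described above, and the node logic and thread ID are placeholders rather than AWS's integration code.

```python
# Minimal LangGraph sketch with checkpointed state (illustrative).
# MemorySaver stands in for a DynamoDB-backed checkpointer; node logic is a placeholder.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver


class AgentState(TypedDict):
    question: str
    answer: str


def answer_node(state: AgentState) -> AgentState:
    # Placeholder for a model call or tool-use step.
    return {"question": state["question"], "answer": f"Echo: {state['question']}"}


graph = StateGraph(AgentState)
graph.add_node("answer", answer_node)
graph.set_entry_point("answer")
graph.add_edge("answer", END)

app = graph.compile(checkpointer=MemorySaver())  # swap in durable (e.g., DynamoDB-backed) storage here

result = app.invoke(
    {"question": "What changed in the latest release?", "answer": ""},
    config={"configurable": {"thread_id": "demo-thread-1"}},
)
print(result["answer"])
```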

By supporting popular open-source agent frameworks while offering managed infrastructure, AWS positions Bedrock as a neutral, enterprise-friendly platform for agentic AI competing with Google Vertex AI and OpenAI-centric stacks without locking customers into proprietary tooling.

#
Bedrock
Models
January 13, 2026

Google Gemini introduces personal intelligence to connect Google apps for tailored AI assistance

Google launched Personal Intelligence for its Gemini AI assistant, letting users opt-in to connect Gmail, Photos, Search, and YouTube so Gemini can provide more contextual, personalized responses based on user data.

Google has announced Personal Intelligence, a new beta feature for its Gemini AI assistant that allows users to optionally connect their Gmail, Photos, Search, and YouTube accounts to create more personalized and context-aware conversations.

Once enabled, Gemini can reason across connected apps to provide tailored help like planning trips, answering specific questions using emails or photos, and suggesting relevant content without training on the user’s private data.

The feature is off by default, available initially to Google AI Pro and Ultra subscribers in the U.S., and will expand to more users and platforms over time, with privacy controls for what data is shared.

#
Google
Models
January 13, 2026

DeepSeek unveils Engram

DeepSeek introduced Engram, a conditional memory system that separates memory from computation in LLMs, reducing GPU memory usage and improving efficiency for future large-scale AI models.

DeepSeek, in collaboration with academic researchers, has unveiled Engram, a novel conditional memory system designed for large language models.

Engram decouples memory storage from computation, allowing models to store and retrieve knowledge efficiently without overloading GPU memory. This approach significantly reduces reliance on expensive high-bandwidth memory while improving reasoning depth and inference efficiency.

By caching knowledge instead of full context, Engram addresses one of the biggest bottlenecks in scaling AI models. The innovation is expected to influence the architecture of next-generation models, including DeepSeek V4, and could reshape how large models balance performance, cost, and scalability.

#
DeepSeek
Models
January 11, 2026

Advancing Claude in healthcare and the life sciences

Anthropic expanded Claude’s healthcare and life-sciences capabilities with Claude for Healthcare and new scientific connectors. The update connects Claude to industry data and tools to accelerate clinical workflows, trial management, and research.

Anthropic announced major extensions to Claude’s applicability in healthcare and life sciences. Building on earlier work such as Claude for Life Sciences, the company unveiled Claude for Healthcare, enabling HIPAA-ready tools for providers, payers, and consumers.

Claude now connects to key medical systems and databases (like CMS Coverage Database, ICD-10 codes, NPI registry, and PubMed) to assist with administrative tasks, prior authorizations, and clinical coordination.

In life sciences, new connectors (Medidata, ClinicalTrials.gov, bioRxiv/medRxiv, and more) enhance Claude’s support for clinical trial operations and regulatory activities. The update leverages improvements in Claude’s agentic performance to deliver practical productivity gains in regulated workflows.

#
Anthropic
Spotlight
January 9, 2026

How we built an agentic AI order processing engine for StockyAI

GoML built an agentic AI order processing engine for StockyAI that turns SMS, WhatsApp and email orders into structured invoices. AI parsing, product matching, pricing and inventory checks run automatically, cutting manual work.

GoML and StockyAI teamed up to automate retail order-to-invoice workflows using an agentic AI engine.

The system reads unstructured text from SMS, WhatsApp and email, extracts order details, matches products, checks real-time prices and validates inventory using APIs. It then generates complete structured invoices and stores them with traceability.

Built as a serverless FastAPI application on AWS Lambda, the engine uses conversational AI to explain decisions and handle variations in order style. Tests show about 95 percent parsing accuracy and 90 percent success in fuzzy product matches. The result cuts manual effort and speeds invoice processing for ecommerce operations.
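
As a rough sketch of the service shape (not GoML's actual code), a FastAPI endpoint takes a raw channel message and returns structured order lines; the field names and extraction step below are hypothetical.

```python
# Illustrative FastAPI endpoint shape for an order-to-invoice service.
# Field names and extraction logic are hypothetical; the real engine uses LLM parsing
# plus external APIs for product matching, pricing, and inventory checks.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class RawOrder(BaseModel):
    channel: str   # "sms", "whatsapp", or "email"
    message: str   # unstructured order text


class OrderLine(BaseModel):
    sku: str
    quantity: int
    unit_price: float


@app.post("/orders/parse")
def parse_order(order: RawOrder) -> list[OrderLine]:
    # Stand-in for: LLM extraction -> fuzzy product match -> price and inventory lookup.
    extracted = [{"sku": "SKU-001", "quantity": 2, "unit_price": 9.50}]
    return [OrderLine(**line) for line in extracted]
```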

#
GoML
Models
January 7, 2026

Introducing ChatGPT Health

OpenAI launched ChatGPT Health, a dedicated, privacy-enhanced space inside ChatGPT that lets users securely connect medical records and wellness apps, helping them navigate health information more confidently without replacing clinicians.

OpenAI introduced ChatGPT Health, a new, dedicated health and wellness experience within ChatGPT that securely brings personal health information together with AI assistance.

Users can connect medical records and wellness applications such as Apple Health, Function, and MyFitnessPal, so responses are grounded in their own health data. Built in close collaboration with over 260 physicians from around the world, ChatGPT Health is designed to help users feel more informed, prepared, and confident navigating health topics, not to diagnose or replace medical care.

The feature incorporates additional privacy and security protections, keeps health conversations separate from regular chats, and does not use sensitive health data to train models.

#
OpenAI
Models
January 6, 2026

OpenAI is preparing to test ads in ChatGPT

OpenAI is reportedly preparing to test advertising inside ChatGPT, initially limited to employees. The move signals potential monetization expansion beyond subscriptions as OpenAI scales costs and infrastructure.

OpenAI is reportedly planning an internal pilot to test advertisements within ChatGPT, with early experiments limited to employees.

This would mark a major shift in OpenAI’s monetization strategy, signaling that subscription revenue alone may not be sufficient as compute and model development costs increase. Journalist Alex Heath reported that OpenAI’s Applications CEO Fidji Simo informed staff that ads are being considered for an internal version of ChatGPT.

If expanded publicly, this could reshape how users experience AI assistants and create a major new digital advertising surface potentially challenging Google search economics.

#
OpenAI
Models
January 6, 2026

Stanford Medicine: AI predicts disease risk from one night of sleep

Stanford researchers developed SleepFM, a foundation model trained on 585,000 hours of sleep data, predicting future risks for diseases like dementia, Parkinson’s, and heart conditions from one night.

Stanford Medicine reports that researchers built SleepFM, an AI foundation model trained on about 585,000 hours of polysomnography (sleep study) data. The model predicts future disease risk across around 130 outcomes, including dementia, Parkinson’s disease, cardiovascular disease, cancer, and even death risk.

It works by analyzing a single night of sleep patterns and extracting hidden health signals that traditional scoring misses.

This is important because it signals a shift toward “passive” preventative medicine where routine sleep data may act like a long-term health biomarker. It could enable earlier interventions and more personalized risk monitoring.

#
OpenAI
Models
January 3, 2026

Meta releases VL-JEPA: a lean vision-language model that rivals giants

Meta introduced VL-JEPA, a vision-language model that predicts semantic embeddings instead of tokens, enabling faster inference and strong world-modeling performance while using fewer parameters.

Meta released VL-JEPA, a joint embedding predictive architecture for vision-language modeling. Unlike traditional multimodal models that generate text token-by-token, VL-JEPA predicts continuous semantic embeddings, shifting the learning objective from discrete language to abstract meaning.
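
A toy way to see the difference in objective (purely conceptual, not Meta's architecture or code): instead of a cross-entropy loss over vocabulary tokens, a predictor is trained to regress continuous target embeddings, as in the hedged sketch below with arbitrary dimensions.

```python
# Conceptual sketch of an embedding-prediction objective (not the actual VL-JEPA code).
# The predictor regresses continuous target embeddings instead of classifying tokens.
import torch
import torch.nn as nn

dim = 512
predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

vision_features = torch.randn(8, dim)      # stand-in for image-encoder output
target_embeddings = torch.randn(8, dim)    # stand-in for frozen text-encoder embeddings

pred = predictor(vision_features)
# Similarity objective in embedding space rather than per-token cross-entropy.
loss = 1 - nn.functional.cosine_similarity(pred, target_embeddings, dim=-1).mean()
loss.backward()
print(float(loss))
```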

This makes the model more efficient and potentially faster, while still performing strongly on tasks requiring world modeling and understanding.

The approach suggests a practical path toward powerful multimodal systems without requiring massive parameter counts or expensive decoding. VL-JEPA is significant because it challenges the assumption that scaling token-generation is the only route to better vision-language intelligence.

#
Meta
Models
January 2, 2026

DeepSeek introduces mHC to fix training instability in large models

DeepSeek researchers introduced mHC, a manifold-constrained residual architecture that prevents training instability in large models by constraining residual matrices, improving scalability and reducing cost.

DeepSeek researchers have introduced mHC, a manifold-constrained residual architecture designed to address training instability in large models.

By constraining residual matrices during training, the approach aims to keep optimization stable as models scale, improving scalability and reducing cost.

The work continues DeepSeek's emphasis on architectural efficiency as a route to frontier-level performance.

#
DeepSeek
Models
December 28, 2025

Keeping your data safe when an AI agent clicks a link

OpenAI published safety guidance on how AI agents handle web links. The focus is on avoiding quiet leaks of private data by only auto-loading links already seen publicly. User control stays central.

OpenAI outlined how it protects user data when AI agents follow links. Agents can help by loading web content, but URLs can carry hidden sensitive information. To reduce risk, OpenAI only lets agents fetch links that are known public URLs from an independent index.

This approach avoids quietly exposing private data during automated tasks. When a link is unknown or unverified, users see warnings before it opens.
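
The pattern reads roughly like the sketch below, an allowlist check plus an explicit confirmation step; the index lookup and confirmation hook are placeholders, not OpenAI's implementation.

```python
# Illustrative "only auto-load known public URLs" pattern. `public_index`,
# `ask_user_to_confirm`, and `fetch` are placeholder callables.
from urllib.parse import urlparse


def safe_fetch(url: str, public_index: set[str], ask_user_to_confirm, fetch):
    """Fetch a URL automatically only if it is already known to be public."""
    normalized = urlparse(url)._replace(query="", fragment="").geturl()

    if normalized in public_index:
        return fetch(normalized)      # known public page: safe to auto-load

    # Unknown or unverified link: never load silently; warn the user first.
    if ask_user_to_confirm(f"Open unverified link {url}?"):
        return fetch(url)
    return None
```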

These safety steps are part of a layered defense that includes prompt injection protections and ongoing monitoring, aiming to balance agent usefulness with stronger privacy safeguards as AI agents become more common.

#
OpenAI
Spotlight
December 21, 2025

Generative AI for social media content analysis: SimplicityDX case study

GoML boosted generative AI accuracy for social media content analysis at SimplicityDX by redesigning prompts. Accuracy rose about 22 percent, extraction errors fell 30-40 percent, and product mapping improved 25 percent.

GoML helped SimplicityDX overcome limits in generative AI social media analysis. Noisy posts with slang, emojis, misspellings and inconsistent naming kept perfect match accuracy at 74 percent. GoML redesigned the LLM prompt framework with better extraction rules, domain examples and contextual cues.

This let AI models interpret informal creator captions more reliably without changing underlying systems. Tested across labeled datasets and multiple models, the improved prompts raised perfect match accuracy by about 22 percent.

It also cut product extraction errors by 30-40 percent and improved creator storefront product mapping reliability by 25 percent, supporting scalable AI for commerce use cases.

#
GoML
Models
December 19, 2025

Anthropic launches Bloom

Anthropic released Bloom, an open-source framework that generates and scores behavioral evaluations automatically, helping researchers measure model risks like deception, bias, and misalignment at scale.

Anthropic introduced Bloom, an open-source tool designed to automate behavioral safety evaluations for frontier models.

Bloom takes a target behavior (e.g., dishonesty, self-interest, bias) and generates diverse test scenarios, measuring both frequency and severity of that behavior in model responses. Anthropic claims Bloom evaluations correlate strongly with hand-labeled judgments and reliably differentiate baseline models from intentionally misaligned ones.
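
In spirit, that kind of automated behavioral evaluation boils down to a generate-then-score loop like the generic sketch below; the callables are hypothetical and this is not Bloom's actual API.

```python
# Generic generate-then-score behavioral evaluation loop (not Bloom's real API).
# `generate_scenarios`, `query_model`, and `judge` are hypothetical callables.

def evaluate_behavior(behavior, generate_scenarios, query_model, judge, n=50):
    """Estimate how often and how severely a target behavior appears in responses."""
    scenarios = generate_scenarios(behavior, n)           # diverse prompts probing the behavior
    scores = []
    for prompt in scenarios:
        response = query_model(prompt)
        scores.append(judge(behavior, prompt, response))  # e.g., 0 = absent ... 1 = severe

    frequency = sum(1 for s in scores if s > 0) / len(scores)
    severity = sum(scores) / len(scores)
    return {"behavior": behavior, "frequency": frequency, "mean_severity": severity}
```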

This is significant because safety evaluation has become a bottleneck: models evolve faster than manual testing can keep up. Bloom’s approach provides repeatable, scalable behavioral auditing that could become a standard layer in safety and governance workflows.

#
Anthropic
Models
December 18, 2025

Introducing GPT-5.2-Codex

OpenAI introduced GPT-5.2-Codex, its most advanced agentic coding model, optimized for long-horizon engineering, large refactors, Windows workflows, and stronger defensive cybersecurity capabilities across Codex products.

OpenAI has released GPT-5.2-Codex, a specialized version of GPT-5.2 optimized for agentic coding and professional software engineering.

The model improves long-context performance through context compaction, delivers stronger results on large-scale refactors and migrations, and is more reliable in native Windows environments.

GPT-5.2-Codex also brings OpenAI’s strongest cybersecurity capabilities to date, accelerating defensive workflows like vulnerability discovery, fuzzing, and secure code review while introducing safeguards to manage dual-use risks. It is available across all Codex surfaces for paid ChatGPT users, with API access planned in the coming weeks.

#
OpenAI
Models
December 17, 2025

Introducing OpenAI academy for news organizations

OpenAI launched the OpenAI Academy for News Organizations, offering hands-on training, newsroom playbooks, and real examples to help journalists use AI responsibly for reporting, research, and operational efficiency.

OpenAI has introduced the OpenAI Academy for News Organizations, a learning hub designed to help journalists, editors, publishers, and newsroom teams adopt AI effectively and responsibly.

The Academy provides on-demand training, including “AI Essentials for Journalists,” along with playbooks, practical workflows, and real-world newsroom examples. It supports use cases such as investigative and background research, translation and multilingual reporting, data analysis, and improving production and operational efficiency.

The program also shares open-source resources and guidance on responsible governance, building on OpenAI’s collaborations with the American Journalism Project and The Lenfest Institute to strengthen journalism sustainability.

#
OpenAI
Models
December 16, 2025

The new ChatGPT Images is here

OpenAI launched a new ChatGPT Images experience powered by a flagship image model. It delivers faster generation, better instruction-following, precise photo edits, and consistent details across iterations for all users.

OpenAI has released an upgraded ChatGPT Images feature, powered by its new flagship image generation model. The update focuses on precision editing, meaning it can modify uploaded images while preserving important details like lighting, composition, and facial consistency, changing only what the user requests.

It also improves instruction-following, enabling more accurate layouts and complex compositions, and makes major strides in text rendering, handling denser and smaller text more reliably. The model generates images up to 4× faster, enabling rapid iteration and creative exploration.

The feature is rolling out to all ChatGPT users and is available via the API as GPT Image 1.5.
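
A minimal API call would follow the existing Images API shape, as in the sketch below; the model identifier is taken from the article's "GPT Image 1.5" naming and may differ from the actual API slug.

```python
# Minimal sketch of generating an image via the OpenAI Images API.
# The model slug is assumed from the article's naming and may differ in practice.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1.5",   # assumed slug, per the article
    prompt="A watercolor map of a coastal city at dawn",
    size="1024x1024",
)

with open("city.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```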

#
OpenAI
Models
December 13, 2025

Codex built Sora for Android in 28 days

OpenAI released a case study showing how it built and shipped the Sora Android app in just 28 days by using its AI coding assistant Codex to write most of the code under human direction.

OpenAI detailed how its engineering team developed and shipped the Sora app for Android in just 28 days by leveraging Codex, the AI coding assistant powered by an early GPT-5.1-Codex model.

A small team of four engineers set up the core architecture, patterns, and quality standards, then used Codex to generate about 85% of the application’s code, dramatically accelerating development.

The Android release quickly topped the Google Play Store and achieved a high crash-free rate, demonstrating how AI-assisted coding can boost productivity when paired with human oversight and clear guidance.

#
OpenAI
Models
December 12, 2025

Advancing science and math with GPT-5.2

OpenAI highlights that GPT-5.2 delivers its strongest performance yet on scientific and mathematical reasoning, improving precision, multi-step logic, and benchmark results to better support research workflows.

OpenAI explains that GPT-5.2 is the company’s most capable model so far for scientific and mathematical work, with major advances in rigorous reasoning, consistency, and abstraction that benefit researchers.

It builds on collaborations with scientists across disciplines and improves performance on expert benchmarks like GPQA Diamond and FrontierMath. GPT-5.2’s strengths help it follow complex logic chains, maintain numerical accuracy, and support workflows such as coding, data analysis, and experimental design.

While it can aid in exploring problems and testing hypotheses, human expertise remains essential for validation, interpretation, and ensuring reliability in scientific contexts.

#
OpenAI
Models
December 12, 2025

Introducing GPT-5.2

OpenAI released GPT-5.2, its most advanced model yet, with major gains in professional workflows, reasoning, long-context memory, coding, and multi-step tasks, outperforming prior versions on key benchmarks.

OpenAI has launched GPT-5.2, the latest upgrade in the GPT-5 series, designed for deeper reasoning, better long-context understanding, stronger coding and productivity abilities, and more accurate, reliable outputs across complex tasks.

It delivers state-of-the-art performance on professional and technical benchmarks, handling large documents, spreadsheets, presentations, and multi-step workflows with fewer errors than earlier models.

GPT-5.2 introduces improved general intelligence, enhanced tool use, and better multimodal performance, making it particularly effective for real-world work. The model is gradually rolling out to users and developers.

#
OpenAI
AI Safety and Regulation
December 12, 2025

OpenAI, Anthropic & Block Launch the Agentic AI Foundation (AAIF)

OpenAI, Anthropic, and Block jointly launched the Agentic AI Foundation to create open standards enabling interoperable enterprise AI agents. The Linux Foundation will host the initiative to standardize agent ecosystems.

OpenAI, Anthropic, and Block unveiled the Agentic AI Foundation (AAIF), an open-standards body under the Linux Foundation that aims to unify and standardize enterprise-grade agent ecosystems.

The foundation introduces a collaborative framework built on Anthropic’s Model Context Protocol (MCP), Block’s Goose framework, and OpenAI’s AGENTS.md. Its mission is to ensure interoperability, security, transparency, and cross-compatibility across agentic systems in enterprises.

AAIF will accelerate adoption by reducing vendor lock-in and enabling organizations to deploy agents reliably across industries. This marks a major shift toward standardized, open, multi-agent architectures for global enterprise AI.

#
Anthropic
#
OpenAI
#
Open source
Ecosystem
December 11, 2025

AWS advances AI Factories and cloud infrastructure

AWS introduces AI Factories combining NVIDIA GPUs, Trainium, Bedrock, and SageMaker to help enterprises scale AI workloads efficiently. The integrated stack aims to democratize high-performance AI development and reduce operational complexity.

AWS launched new AI Factories, an integrated infrastructure layer combining NVIDIA GPUs, AWS Trainium, high-bandwidth networking, and software services like Bedrock and SageMaker AI.

The goal is to make high-performance AI development and deployment accessible to enterprises of all sizes. AI Factories provide a unified environment for model training, fine-tuning, and agentic workflows while optimizing cost and performance.

By simplifying cluster management, data movement, and security, AWS positions itself as a full-stack AI provider capable of competing with Azure’s OpenAI stack and Google’s Gemini cloud offerings.

#
AWS
Models
December 10, 2025

In Perplexity, AI agents are taking over complex enterprise tasks

Perplexity’s data shows AI agents are increasingly driving complex enterprise workflows, handling multi-step productivity and research tasks for knowledge workers and acting as cognitive partners, not just tools for routine work.

Perplexity released large-scale usage data indicating that AI agents are already being adopted by enterprise knowledge workers to carry out complex, multi-step workflows, especially in productivity and research domains.

Their analysis of hundreds of millions of interactions with Perplexity’s Comet browser and assistant reveals that agents aren’t limited to simple administrative automation but are tackling cognitive work such as synthesising information and executing tasks autonomously.

Adoption is strongest among digitally-intensive professions, where agents act as thinking partners, enhancing human capability rather than replacing it outright. The shift underscores the growing role of agentic AI in enterprise productivity and workflow automation.

Ecosystem
December 9, 2025

AWS announces substantial improvements to AgentCore on Bedrock

AWS expanded Bedrock AgentCore with composable services (runtime, gateway, policy, memory, identity, evaluations, observability, and tools like Code Interpreter and Browser) to accelerate secure, production-grade agent development at scale.

Amazon Bedrock’s AgentCore platform received major upgrades to streamline building, governing, and scaling production AI agents. The platform now exposes composable capabilities including Runtime, Gateway, Memory, Identity, Policy with real-time enforcement, Evaluations, Observability, Code Interpreter, and Browser.

Policies can be defined in natural language and translated into Cedar for enforcement, while sessions can be isolated for up to eight hours to support complex workflows.

AgentCore integrates with CloudWatch to measure quality metrics such as correctness and safety. It is framework-agnostic and is already used by customers like Ericsson and Thomson Reuters to operate secure, robust agentic systems.​

#
AWS
Ecosystem
December 9, 2025

AWS introduces Nova 2 Omni, its A2A model

Nova 2 Omni is AWS's industry-first multimodal model processing text, image, video, and audio inputs with unified text/image outputs, enabling agents to reason across diverse media like keynote summaries with visuals.

Nova 2 Omni stands as the multimodal pinnacle of the Nova 2 lineup, ingesting text, images, videos, and audio while generating text or image responses from a single model architecture.

It unifies reasoning over mixed modalities for tasks such as analyzing presentations with slides, extracting insights from multimedia content, or powering agents that interpret visual and auditory context alongside text.

By handling diverse inputs natively, Omni simplifies development of cross-media AI applications, reduces model orchestration complexity, and supports richer enterprise use cases like content summarization or interactive visual analysis.

#
AWS
Models
December 8, 2025

Introducing Anthropic Interviewer

Anthropic launched Anthropic Interviewer, an AI-powered tool using Claude to automatically interview professionals about how they use and feel about AI, gathering insights to inform product design and societal research.

Anthropic introduced Anthropic Interviewer, a new automated interview tool powered by its Claude AI, designed to conduct large-scale, conversational interviews with professionals about their experiences and perspectives on AI use.

In an initial pilot of 1,250 interviews spanning general workers, scientists, and creative professionals, the tool revealed mostly positive views on AI’s productivity benefits, along with concerns about job identity, reliability, and social stigma.

Anthropic plans to use this qualitative data to deepen understanding of AI’s impact on work and everyday life and to refine future AI systems and policies.

#
Anthropic
Ecosystem
December 8, 2025

AWS introduces Nova Forge

Nova Forge is a service offering access to Nova training checkpoints so customers can blend proprietary data with Amazon-curated datasets and reinforcement tuning to create customized frontier-class models for Bedrock.

Nova Forge is a new AWS service that lets enterprises build domain-specialized variants of Nova by accessing intermediate training checkpoints and combining them with proprietary and Amazon-curated data. Customers can shape “novellas” that encode their own industry or organizational knowledge without sacrificing Nova’s core reasoning abilities.

The service supports remote reward functions and reinforcement fine-tuning, enabling production-ready, safety-aligned frontier models tuned to specific tasks or compliance needs.

Once trained, these customized Nova variants can be pushed directly into Amazon Bedrock, giving organizations a streamlined path from experimentation to deployment while retaining strong control over their data and model behavior.

#
AWS
Ecosystem
December 8, 2025

AWS introduces Nova 2 model family

AWS launched the Nova 2 family: Lite for fast reasoning and tool use, Pro for complex workloads, and Sonic for multilingual speech-to-speech, delivering cost-effective, high-performance models across agentic AI use cases.

The Nova 2 model family introduces AWS's optimized frontier models for enterprise AI: Lite for efficient instruction following, tool calling, code generation, and document tasks; Pro for advanced agentic reasoning and benchmark performance; Sonic for real-time, low-latency multilingual speech interactions; and Omni for multimodal processing.

Each variant targets specific strengths: Lite beats competitors on price-performance, Pro excels in multi-step tool use, and Sonic enables natural telephony apps, positioning Nova 2 as a versatile backbone for scalable agentic systems, from high-volume automation to interactive experiences.

#
AWS
Ecosystem
December 7, 2025

CloudWatch for AI agent observability

AWS introduced new CloudWatch capabilities to observe AI agents in real time, showing decisions, service connections, and execution paths so teams can debug faster, reduce guesswork, and build trust in agentic systems.

AWS announced enhanced CloudWatch features focused on observability for AI agents, giving teams real-time visibility into how agents make decisions and interact with underlying services.

The updates surface complete execution paths, making it easier to trace failures, understand dependencies, and identify where workflows break. This reduces guesswork during incident investigations and helps enterprises enforce governance and safety on AI-driven workloads.

By making agent behavior transparent instead of opaque, the new CloudWatch capabilities directly address one of the biggest blockers to production adoption of agentic AI: the inability to confidently see, explain, and audit what the system is doing.

#
AWS
Industries
December 4, 2025

Nvidia servers turbo-charge DeepSeek with up to 10× acceleration

Nvidia’s newest AI server architecture reportedly accelerates models from DeepSeek (and others) by up to ten times, boosting inference speed and making high-performance AI more accessible under compute constraints.

In a recent hardware update, Nvidia demonstrated that its latest AI server equipped with a dense cluster of high-performance chips and ultra-fast interconnects can speed up models from DeepSeek (among others) by a factor of ten compared to previous generations.

This dramatic performance boost significantly reduces inference latency and compute costs, making powerful AI models more viable for both research labs and enterprise deployments.

By combining high compute throughput with optimized architecture, these servers help democratize access to advanced AI capabilities, even under geopolitical constraints and export limitations.

#
Nvidia
Models
December 4, 2025

Snowflake and Anthropic announce $200 million partnership

Anthropic and Snowflake expanded their partnership with a $200 million multi-year deal. Anthropic’s AI models will now be integrated into Snowflake’s data cloud, enabling enterprise-grade AI agents across 12,600+ global customers.

Anthropic and Snowflake have formalized a major expansion of their collaboration via a $200 million multi-year agreement. Under this deal, Anthropic’s advanced language models (such as Claude) will be embedded directly within Snowflake’s AI Data Cloud, making them accessible to more than 12,600 enterprise customers worldwide.

This integration powers Snowflake’s new “agentic AI” services, enabling businesses including those in regulated industries like finance, healthcare, and life sciences to run complex analyses and AI-driven workflows on both structured and unstructured data, while keeping it securely within their existing governed data environment.

The aim: bring powerful, context-aware AI tools into production-ready enterprise workflows.

#
Anthropic
Ecosystem
December 3, 2025

Financial services innovation with agentic AI

AWS showcased financial services advancing agentic AI through Allianz's multi-agent platform for claims, risk, and fraud; trust foundations with visibility, governance, and compliance; plus Coinbase X402 for agent-native payments and micro-transactions.

The financial services track emphasized agentic AI's operational shift, with Allianz demonstrating a model-agnostic multi-agent framework featuring reusable agents, discovery/registry systems, flexible orchestration, strong governance, and full action traceability for scalable workflows in claims, risk evaluation, and fraud review.

Banks and insurers gain advantages from cloud readiness, secure data, and AI governance enabling safe automation in core systems like money movement and claims processing. Coinbase's X402 standard introduces agent-driven payments supporting stablecoin settlement, machine-to-machine transactions, micro-purchases, automated billing, and low-fee flows, unlocking workflows for data acquisition, fraud detection, and financial services.

Trust pillars (visibility, repeatability, safe tools, identity permissions, and interoperability) form the foundation for regulated adoption.

#
AWS
Models
December 3, 2025

Perplexity AI: BrowseSafe / BrowseSafe-Bench launch

Perplexity launched BrowseSafe and BrowseSafe-Bench tools designed to detect malicious prompt-injections and other web threats in real time, raising the security standards for AI-powered browser agents.

Perplexity unveiled BrowseSafe, a real-time HTML scanner tailored to catch malicious prompt-injection attacks embedded in webpages before an AI agent executes instructions.

Alongside this, it released BrowseSafe-Bench: an open benchmark suite simulating 14,700+ realistic attack scenarios to test defenses across diverse web environments.

The fine-tuned model (based on Qwen3-30B) reportedly delivers about 90–91% detection accuracy while maintaining the speed needed for smooth browser use. By offering this protection and open benchmarking, Perplexity is pushing the AI-browsing ecosystem toward greater security and transparency.

Ecosystem
December 2, 2025

AWS introduces AI Factories

AWS introduced AI Factories: dedicated AI infrastructure deployed in customer data centers, combining Trainium and NVIDIA GPUs with services like SageMaker and Bedrock to meet sovereignty and compliance needs.

AWS AI Factories provide dedicated, fully managed AI infrastructure deployed directly in customer data centers, effectively creating private AWS-like regions optimized for AI workloads.

These environments include Trainium and NVIDIA GPUs alongside managed services such as Amazon SageMaker and Bedrock, giving enterprises access to advanced training and inference capabilities while keeping data and operations on-premises.

AI Factories are positioned for regulated and sovereign use cases where data residency, privacy, and compliance rules are strict, with Saudi Arabia’s Humane AI zone highlighted as an example. The offering extends AWS’s AI ecosystem into customer-controlled facilities without sacrificing cloud-grade reliability.


#
AWS
Ecosystem
December 2, 2025

AWS details Trainium3 Ultra servers and Trainium4

AWS announced general availability of Trainium3 Ultra Servers and previewed Trainium4, delivering large efficiency and performance gains for frontier models with massive FP compute, bandwidth, and energy efficiency improvements.

AWS detailed its next-generation AI accelerators, confirming Trainium3 Ultra Servers are generally available and previewing Trainium4 for future large-scale training.

Trainium3 uses 3 nm technology, packs 144 chips per rack, delivers hundreds of FP8 petaflops and more than 700 TB/s bandwidth, and achieves multiple-fold improvements in compute, memory bandwidth, and tokens per megawatt over earlier generations.

Over one million Trainium chips are already deployed, making it a multi-billion-dollar business. Trainium4 is designed to further increase FP4 compute and memory bandwidth for the very largest models, reinforcing AWS's commitment to cost-efficient, high-scale AI infrastructure.

#
AWS
Ecosystem
December 2, 2025

Amazon presents Q, their enterprise assistant

AWS presented Amazon QUIC, an enterprise AI assistant that unifies data access, BI, research, and workflow automation to streamline decision-making and productivity across business tools.
Expand

Amazon QUIC, positioned as an enterprise-grade AI productivity assistant, brings together data retrieval, business intelligence, research support, and workflow automation in a single interface.

It connects to varied enterprise systems so users can query data, generate insights, and trigger actions without jumping between disparate tools. The assistant is aimed at knowledge workers and decision-makers who need faster, more context-rich answers and automated follow-through, effectively extending the Amazon Q vision for business users.

By centralizing AI-driven assistance over multiple data sources and applications, QUIC is designed to reduce friction, accelerate decisions, and standardize AI usage across organizations.

#
AWS
Ecosystem
December 2, 2025

AWS announces model expansions on Bedrock

AWS significantly expanded Bedrock’s model catalog, adding more than 18 new models including Mistral Large, Mistral 3, Gemma, and NVIDIA Nemotron, increasing choice across proprietary and open-weight options.
Expand

Amazon Bedrock’s model lineup grew substantially with the addition of over 18 new models, giving customers broader flexibility across open and proprietary options.

Notable additions include Mistral Large with increased parameter count and doubled context length, Mistral 3 optimized for edge and single-GPU deployments, Google’s Gemma family, and NVIDIA’s Nemotron models.

This expansion strengthens Bedrock’s positioning as a neutral, multi-model platform where enterprises can mix and match best-fit models for different workloads. AWS also highlighted that more than 50 customers have already processed over a trillion tokens each through Bedrock, with Trainium powering most inference.

#
AWS
Ecosystem
December 2, 2025

Amazon launches Kiro development agents

AWS introduced Amazon development agents, including Kiro, AWS Security Agent, and AWS DevOps Agent, now in preview, to accelerate coding, security, and operations workflows, with generous free Kiro seats for startups.
Expand

AWS announced a new family of development-focused AI agents: Kiro development agents, AWS Security Agent, and AWS DevOps Agent, all available in preview. These agents aim to speed up software delivery by assisting with coding tasks, security reviews, and operational workflows such as deployments and monitoring.

Startups can access up to 100 free Kiro seats for one year if they apply within a limited window, lowering the barrier to adoption.

The development agents tie into the broader Bedrock and agentic ecosystem, enabling teams to bring AI support directly into the SDLC, security pipelines, and DevOps practices.

#
AWS
Ecosystem
December 2, 2025

AWS announces Lambda Durable Functions

Lambda Durable Functions enable long-running, stateful workflows up to one year with managed state, retries, and pauses, ideal for complex agentic and human-in-the-loop processes that scale to zero when idle.
Expand

Lambda Durable Functions extend AWS Lambda into a platform for durable, long-lived workflows without custom orchestration code. Developers define “steps” for logic and retries plus “waits” for pauses such as human approvals, external callbacks, or AI agent processing.

The system automatically manages state, error handling, and recovery, with executions lasting up to a year while charging only for active compute and scaling to zero when idle.

Available via SDKs for Python and Node.js and deployable with SAM or CDK, Durable Functions are a key building block for enterprise-grade agentic workflows that span long-running business processes.
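
To make the steps-and-waits idea concrete, below is a minimal, framework-agnostic Python sketch of the durable-execution pattern; the function names and the local checkpoint file are invented for illustration and are not the actual Lambda Durable Functions SDK.

# Illustrative durable-workflow pattern (NOT the AWS SDK): each step
# checkpoints its result so a re-run resumes instead of redoing work.
import json, os

CHECKPOINTS = "checkpoints.json"

def _load():
    return json.load(open(CHECKPOINTS)) if os.path.exists(CHECKPOINTS) else {}

def _save(state):
    json.dump(state, open(CHECKPOINTS, "w"))

def step(name, fn, *args):
    state = _load()
    if name in state:                  # already completed in a previous run
        return state[name]
    result = fn(*args)
    state[name] = result
    _save(state)
    return result

class Waiting(Exception):
    """Raised when the workflow must pause (e.g., for human approval)."""

def wait_for(name, check_fn):
    state = _load()
    if not state.get(name) and not check_fn():
        raise Waiting(name)            # suspend; a later invocation resumes here
    state[name] = True
    _save(state)

def workflow(order):
    draft = step("draft", lambda o: f"proposal for {o}", order)
    wait_for("approved", lambda: os.path.exists("approved.flag"))
    return step("finalize", lambda d: d.upper(), draft)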

#
AWS
Models
December 2, 2025

DeepSeek Math-V2 becomes the first open-source model to reach IMO gold

DeepSeek-Math-V2, a fully open-weight math model, has reportedly achieved gold-level performance at the 2025 International Mathematical Olympiad (IMO), marking the first such success by an open-source system.
Expand

DeepSeek has released Math-V2, an open-weights model designed for rigorous mathematical reasoning and proof generation.

In 2025, it reportedly solved enough problems at the IMO to earn a gold-medal-class result, a first for any open-source AI. Math-V2 employs a generator-verifier-meta-verifier loop to self-check and refine proofs, aiming not just for correct answers but for valid reasoning chains.
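
As a rough illustration of that loop (a generic sketch, not DeepSeek's published training code; the helper functions stand in for model calls):

# Generic sketch of a generate / verify / meta-verify refinement loop;
# the three helpers are placeholders standing in for model calls.

def generate_proof(problem: str, feedback=None) -> str:
    return f"proof of {problem}" + (f" (revised: {feedback})" if feedback else "")

def verify(problem: str, proof: str) -> dict:
    # A real verifier model would check each reasoning step.
    return {"ok": "revised" in proof, "issues": ["step 3 unjustified"]}

def meta_verify(problem: str, proof: str, review: dict) -> bool:
    # A meta-verifier audits the verifier's own judgement.
    return review["ok"]

def solve(problem: str, max_rounds: int = 4):
    feedback = None
    for _ in range(max_rounds):
        proof = generate_proof(problem, feedback)
        review = verify(problem, proof)
        if review["ok"] and meta_verify(problem, proof, review):
            return proof               # accepted only when both checks pass
        feedback = review["issues"]    # criticisms flow back to the generator
    return None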

It also scored an almost perfect 118/120 on the 2024 Putnam exam under unlimited compute conditions. This milestone signals that open-source AI is now capable of human-level mathematical reasoning and formal problem solving.

#
DeepSeek
Models
December 2, 2025

OpenAI CEO Sam Altman declares ‘code red’ to improve ChatGPT amid rising competition

OpenAI has accelerated the GPT-5.2 launch to as early as December 9, 2025, a direct response to Gemini 3, with major upgrades in speed, reasoning, and stability.
Expand

OpenAI is rushing out GPT-5.2 earlier than planned, targeting a potential December 9, 2025 release after declaring an internal “code red.”

The accelerated schedule comes in direct response to Google’s Gemini 3 setting new performance benchmarks, prompting OpenAI to prioritise speed, stability, and reasoning improvements over other projects.

Internal tests reportedly show GPT-5.2 outperforming Gemini 3 in several reasoning tasks, raising expectations that the update could help OpenAI regain its competitive edge in the rapidly evolving AI landscape.

#
OpenAI
Ecosystem
December 1, 2025

Effortless databases and Aurora as agent memory

AWS and partners like Vercel now enable effortless production database setup, with Aurora serverless and LLM-driven schema tools, positioning AWS databases as short- and long-term memory layers for modern AI agents.
Expand

AWS showcased a new “effortless databases” direction that reduces friction for builders deploying data backends for AI applications. Through integrations with partners like Vercel, developers can provision production-grade databases directly from their existing dashboards.

Aurora serverless options provide elastic scale, while LLM-assisted modeling tools simplify schema design and evolution. Crucially, AWS framed its databases as memory and state engines for agentic AI, supporting both short-term and long-term context persistence.

Customer stories, including Robinhood’s move to Aurora in a regulated environment, demonstrated the model’s viability, delivering lower costs, higher reliability, and better performance for data-intensive, agent-driven workloads.

#
AWS
Ecosystem
December 1, 2025

Amazon Connect AI enhancements

AWS expanded Amazon Connect with deeper AI features that bring context to every interaction, recommend actions, automate background tasks, and enable fully automated, human-only, or hybrid customer support models with real-time quality feedback.
Expand

AWS highlighted major AI-driven enhancements to Amazon Connect aimed at transforming customer service operations. The platform now uses AI to assemble rich context before an interaction begins, so agents spend less time gathering information and more time solving problems. Intelligent recommendations guide next best actions while background tasks and summaries are automated.

Organizations can choose fully automated, human-only, or hybrid configurations to match their support strategy. Real-time quality assurance provides continuous feedback and scoring at scale.

Customer examples such as Priceline, which reported significant time savings per call and more accurate workflows, underscore the operational and experience gains from these capabilities.

#
AWS
Ecosystem
December 1, 2025

AWS Transform for migration and modernization

AWS launched AWS Transform, an AI-powered platform that uses agents to automate discovery, planning, code changes, testing, and execution for VMware migrations, mainframe modernization, and Windows application modernization at enterprise scale.
Expand

AWS announced AWS Transform, a new AI-powered platform designed to make migration and modernization continuous rather than painful. Transform uses specialized agents to discover existing systems, generate migration and modernization plans, propose and implement code changes, automate test creation, and orchestrate end-to-end execution.

Initial support covers VMware migrations, mainframe modernization, and Windows applications. Features like Transform Custom and Transform Composability let enterprises and partners define their own modernization agents and patterns, including cross-language code changes.

Customer results from CSL and BMW illustrate the impact: dramatic reductions in discovery and planning time, faster test generation, higher coverage, and accelerated application modernization.

#
AWS
Ecosystem
December 1, 2025

Amazon QuickSuite for human–AI workflows

Amazon QuickSuite is a new unified workspace where intelligent agents connect tools like SharePoint, Confluence, CRMs, ServiceNow, and Box to search, analyze, automate workflows, and share insights across enterprise systems.
Expand

AWS introduced Amazon QuickSuite, a unified workspace designed to reduce fragmentation across enterprise tools by bringing documents, workflows, and insights into a single environment.

Intelligent agents inside QuickSuite can search across platforms such as SharePoint, Confluence, CRM systems, ServiceNow, and Box, then automate tasks, route work, and generate insights in context. Real-world customers showcased measurable outcomes: AstraZeneca accelerates research workflows, BMW streamlines engineering processes, and 3M improves global sales operations.

The core idea is simple but powerful: when humans and AI operate in the same contextual workspace instead of juggling multiple disconnected tools, work quality improves and cycle times shrink.

#
AWS
Models
December 1, 2025

DeepSeek releases V3.2 & V3.2-Speciale rivals GPT-5 & Gemini

DeepSeek‑V3.2 and its high-compute sibling DeepSeek‑V3.2‑Speciale have been launched, claiming reasoning, coding and math capabilities comparable to GPT‑5 and Gemini 3 Pro while remaining open-source and cost-efficient.
Expand

DeepSeek unveiled V3.2 and V3.2-Speciale, two new open-source large language models. The standard V3.2 balances inference efficiency with strong reasoning, making it suitable for everyday tasks.

The Speciale variant pushes performance to the limits: it delivers gold-level results on challenging math benchmarks including the 2025 Olympiad exams, and reportedly competes head-to-head with GPT-5 and Gemini 3 Pro on coding, logic, and reasoning tasks.

With an innovative “Sparse Attention” architecture reducing compute costs and enabling long-context reasoning, this release challenges the assumption that top-tier AI must remain proprietary.
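
For intuition on the idea (a generic sketch of top-k sparse attention, not DeepSeek's exact architecture), each query attends only to its k highest-scoring keys, which trims compute for long contexts:

# Rough top-k sparse attention sketch (illustrative, not DeepSeek's design):
# each query attends only to its k highest-scoring keys.
import numpy as np

def topk_sparse_attention(q, k, v, top_k=64):
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                          # (Tq, Tk)
    if top_k < scores.shape[-1]:
        kth = np.partition(scores, -top_k, axis=-1)[:, -top_k:][:, :1]
        scores = np.where(scores >= kth, scores, -np.inf)  # mask everything else
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = np.random.randn(8, 16)
k = np.random.randn(128, 16)
v = np.random.randn(128, 16)
out = topk_sparse_attention(q, k, v, top_k=32)             # shape (8, 16)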

#
DeepSeek
Models
November 27, 2025

OpenAI sees API data breach via Mixpanel hack

A Mixpanel breach exposed limited analytics data of some OpenAI API users, including names and emails, but no sensitive information such as passwords, API keys, or chat content was compromised.
Expand

A security incident at Mixpanel, an analytics provider used by OpenAI, resulted in the exposure of limited data belonging to certain OpenAI API users. The leaked dataset included basic profile information such as names, email addresses, approximate location based on browser data, device details, and user or organization IDs.

OpenAI confirmed that none of its own systems were breached and that no sensitive data like passwords, API keys, payment information, chat history, or usage logs was exposed.

OpenAI has discontinued using Mixpanel, notified impacted users, and advised increased awareness regarding phishing or social engineering attempts.

#
OpenAI
Models
November 26, 2025

Effective harnesses for long-running agents

Anthropic shows how to make long-running AI agents work reliably by using an “initializer” agent to scaffold projects and a “coding” agent to make incremental, well-documented, tested progress across sessions.
Expand

Anthropic addresses the challenge of AI agents forgetting context between sessions, a major obstacle for long-running tasks like building software over hours or days.

Their solution uses a two-agent harness: an initializer agent sets up the project environment, creating a git repo, init scripts, a structured feature list and a progress log; then a coding agent works incrementally, implementing one feature per session, running end-to-end tests, committing clean code, and updating progress.

This disciplined, engineering-style workflow prevents agents from “one-shotting” tasks or prematurely marking projects as complete, enabling reliable, multi-session progress.
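
A minimal sketch of what such a harness can look like, assuming a run_agent helper that calls a coding agent (the file names and prompts below are placeholders, not Anthropic's actual code):

# Minimal initializer + coding-agent harness sketch.
# run_agent() is a placeholder for a call to a coding model/agent.
import json, subprocess, pathlib

def run_agent(prompt: str) -> str:
    """Placeholder: send the prompt to a coding agent and return its output."""
    return ""

def initialize(project_dir: str, features: list[str]) -> None:
    root = pathlib.Path(project_dir)
    root.mkdir(exist_ok=True)
    subprocess.run(["git", "init"], cwd=root, check=True)
    (root / "features.json").write_text(json.dumps(
        [{"name": f, "done": False} for f in features], indent=2))
    (root / "PROGRESS.md").write_text("# Progress log\n")

def coding_session(project_dir: str) -> None:
    root = pathlib.Path(project_dir)
    features = json.loads((root / "features.json").read_text())
    todo = next((f for f in features if not f["done"]), None)
    if todo is None:
        return                                       # everything is implemented
    run_agent(f"Implement exactly one feature: {todo['name']}. "
              "Run the end-to-end tests before finishing.")
    subprocess.run(["pytest"], cwd=root, check=True)  # gate progress on tests passing
    todo["done"] = True
    (root / "features.json").write_text(json.dumps(features, indent=2))
    with (root / "PROGRESS.md").open("a") as log:
        log.write(f"- completed {todo['name']}\n")
    subprocess.run(["git", "add", "-A"], cwd=root, check=True)
    subprocess.run(["git", "commit", "-m", f"feat: {todo['name']}"], cwd=root, check=True)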

#
Anthropic
Models
November 24, 2025

Introducing advanced tool use on the Claude Developer Platform

Anthropic’s “Advanced Tool Use” lets its model Claude dynamically discover, orchestrate, and execute external tools via code, enabling efficient, scalable, and accurate multi-tool workflows without overloading the model’s context.
Expand

Anthropic has introduced a new set of features enabling Claude to handle complex workflows through advanced tool use.

These include a Tool Search Tool (for dynamic, on-demand discovery of tools), Programmatic Tool Calling (letting Claude write code to call multiple tools, handle logic and data transformations, and avoid flooding its context with intermediate results), and Tool Use Examples (providing exemplar calls so the model learns correct usage patterns beyond mere schema).

This approach improves efficiency, reduces token and inference overhead, increases accuracy for multi-step tasks, and enables scalable integration with large tool libraries, making Claude far more capable for real-world automation and orchestration.
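
As a rough illustration of the direction, the sketch below uses the standard Anthropic Python SDK tool-calling API; the tool name, schema, example call, and model id are invented, and the specific beta fields for Tool Search and Tool Use Examples are not reproduced here.

# Sketch of Claude tool calling with the Anthropic Python SDK; the tool,
# the embedded usage example, and the model id are invented for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_fx_rate",
    "description": ("Return the spot FX rate for a currency pair. "
                    "Example call: {\"pair\": \"EURUSD\"}"),  # example guides usage
    "input_schema": {
        "type": "object",
        "properties": {"pair": {"type": "string"}},
        "required": ["pair"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-5",          # assumed model id
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What is EURUSD right now?"}],
)
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # the tool call Claude wants to make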

#
Anthropic
Models
November 24, 2025

Anthropic releases Opus 4.5 with new Chrome and Excel integrations

Claude Opus 4.5 is Anthropic’s new flagship AI model. It adds deep improvements in coding, reasoning, long-context memory plus new integrations with Chrome and Excel for real-world productivity tasks.
Expand

Anthropic has launched Claude Opus 4.5, its most advanced AI model to date, delivering major performance gains in coding, reasoning, and real-world productivity.

The model scores highest on benchmark tests such as SWE-Bench Verified, reflecting top-tier code generation and problem-solving capabilities.

Opus 4.5 also introduces memory improvements for long-context tasks and supports agentic workflows, making it suitable for complex, multi-step work over longer sessions. Alongside the release, Anthropic is rolling out new integrations: a browser extension for Chrome and a spreadsheet assistant for Excel, enabling the model to interact with everyday tools for browsing, data manipulation, and office automation.

#
Anthropic
Models
November 24, 2025

Introducing shopping research in ChatGPT

ChatGPT now offers “Shopping Research”: describe what you want, it fetches and compares products online, and delivers a personalized buyer’s guide, all inside the chat, available to all users.
Expand

ChatGPT by OpenAI has gained a new feature: Shopping Research. It turns ChatGPT into a guided personal shopper: you describe what you need (e.g., “quiet cordless vacuum for a small flat”), it asks clarifying questions, searches trusted retail sites for specs, prices, reviews, and availability, then builds a personalized buyer’s guide.

The tool is available now on mobile and web for Free, Go, Plus and Pro plans, and during the holiday season usage is nearly unlimited.

Shopping Research uses a specialized “GPT-5 mini” model fine-tuned for shopping tasks, integrates optional user memory for better recommendations, and promises future direct checkout for merchants supporting “Instant Checkout.”

#
OpenAI
Models
November 23, 2025

DeepSeek and Gemini models outperform ChatGPT in user ratings

A large-scale Prolific study ranked ChatGPT only 8th, behind DeepSeek, Gemini, Mistral, and Grok. Gemini 2.5 Pro and DeepSeek models dominated real user satisfaction and task-quality ratings.
Expand

A large-scale Prolific study evaluated leading AI models using real user tasks and preference scoring. ChatGPT unexpectedly ranked 8th, trailing models from DeepSeek, Mistral, Google, and xAI.

Gemini 2.5 Pro received the highest performance ratings, followed closely by DeepSeek v3 and DeepSeek R1, which users preferred for reasoning depth, consistency, and speed-to-answer.

The results reflect a growing shift in user sentiment: high-performance, lower-cost alternatives are increasingly challenging OpenAI’s dominance. For enterprises evaluating multi-model strategies, this shows the competitive landscape is diversifying rapidly, especially with Chinese and open-weight models gaining traction.

#
DeepSeek
AI Safety and Regulation
November 22, 2025

Trump administration may not challenge state AI regulations

The Trump administration has reportedly put on hold an executive order that would have created a DOJ “AI Litigation Task Force” to challenge state AI laws like California’s SB 53.
Expand

According to recent reports, the Trump administration is backing off its plan to legally challenge state-level AI regulations. Initially, the draft executive order would have set up a Department of Justice “AI Litigation Task Force” aimed at suing states over their AI laws, particularly California’s SB 53, and threatening to withhold federal broadband funding.

But now, that order is on hold amid internal pushback and political risk. This shift could signal a retreat from a unified preemption strategy, leaving states with greater power to regulate AI independently.

#
U.S.
Models
November 20, 2025

OpenAI partners with Foxconn to build next-gen AI hardware

OpenAI is partnering with Foxconn to co-design and manufacture advanced AI data-center hardware in the U.S., including server racks, network, and power systems.
Expand

OpenAI has announced a collaboration with Foxconn to boost U.S.-based AI infrastructure. Together, they will co-design next-generation AI data center racks, networking, power, and other critical hardware, leveraging Foxconn’s manufacturing scale and OpenAI’s insights into emerging model compute needs.

While the agreement doesn’t commit to immediate purchases, OpenAI will have early access to evaluate Foxconn-built systems and an option to buy.

The partnership aims to strengthen domestic AI supply chains, improve manufacturing capacity, and accelerate deployment of high-performance compute infrastructure in the United States.

#
OpenAI
Models
November 20, 2025

GPT-5 shows breakthrough potential in accelerating science

OpenAI published early case studies showing how GPT-5 is helping scientists in math, physics, biology, and materials science conduct novel reasoning, literature review, and even generate new proofs.
Expand

OpenAI released a report on early experiments where GPT-5 accelerated scientific research across disciplines such as mathematics, biology, physics, computer science, and astronomy. GPT-5 was used to synthesize complex literature, perform advanced computations, and even propose formal proofs for unsolved propositions.

The studies emphasize that while the model can suggest new research directions and generate insightful reasoning, it also has limitations such as hallucinating references or reasoning paths, underscoring the necessity of expert oversight.

OpenAI’s goal is to transparently showcase GPT-5’s potential and limitations in real scientific workflows.

#
OpenAI
Models
November 19, 2025

Scania adopts ChatGPT Enterprise to transform operations

Scania has deployed OpenAI’s ChatGPT Enterprise across its organisation from engineering to operations empowering teams globally to explore AI solutions in a decentralized, experiment-driven way.
Expand

OpenAI and Scania have partnered to accelerate AI adoption across the Swedish transport manufacturer’s global workforce. Over the past year, Scania issued ChatGPT Enterprise licenses widely, enabling its engineering and operations teams to run experiments, share learnings, and build use cases organically.

The collaboration supports Scania’s transformation into a software- and data-driven business, with AI playing a role in design, process optimization, and decision-making.

This decentralized, bottom-up approach is helping Scania reimagine how employees innovate using generative AI while maintaining alignment with its mission for sustainable transport.

#
OpenAI
Models
November 19, 2025

Target launches AI-powered shopping with OpenAI

OpenAI is partnering with Target to embed AI into retail via a Target app in ChatGPT and improve employee productivity and guest experience using its enterprise APIs.
Expand

OpenAI and Target have announced a partnership to integrate AI directly into retail operations and customer experience.

They’re launching a dedicated Target app within ChatGPT for shoppers to browse, build multi-item baskets, and check out using options like Drive Up, Order Pickup, or shipping. Behind the scenes, Target is leveraging OpenAI APIs and ChatGPT Enterprise across its organization to boost employee productivity and improve internal workflows.

The collaboration also powers AI-based guest support tools, smarter recommendations, and dynamic vendor-partner interactions part of Target’s broader ambition to weave intelligence into its business.

#
OpenAI
Models
November 18, 2025

Google launches Gemini 3 with new coding app and record benchmark scores

Google has launched Gemini 3, its most advanced AI model yet, along with a new agentic coding app called Antigravity. The model achieves record benchmark scores across reasoning, multimodal, and coding tasks.
Expand

Google unveiled Gemini 3, its latest and most capable generative AI model, available immediately in the Gemini app and via Google Search’s AI mode.

This model delivers a significant leap in reasoning, multimodal understanding, and tool use. On standard benchmarks, Gemini 3 Pro scored a record 37.4 on “Humanity’s Last Exam” and set new highs on LMArena, WebDev Arena, and agentic coding evaluations.

To support developers, Google also introduced Antigravity, an IDE-like platform where AI agents (powered by Gemini 3) interact directly with code editors, terminals, and browsers to build software autonomously.

The GoML POV

Google’s release of Gemini 3 is a solid leap in multimodal reasoning and agentic coding. But from an enterprise perspective, benchmark wins are only half the story. Models don’t succeed in production because they top HLE or LMArena; they succeed when they behave consistently across messy, high-stakes, real-world workloads.

At GoML, across healthcare, finance, and insurance deployments, we’ve learned that enterprises care far more about predictability, governance, latency guarantees, auditability, and cost-efficiency. These remain open questions for Gemini 3. Google’s new coding agent, Antigravity, looks powerful, but its real test is whether it can maintain workflow stability, integrate cleanly with legacy stacks, and operate within enterprise security boundaries.

Gemini 3 is an impressive research milestone, but adoption will depend on how well it performs inside controlled enterprise environments, supports domain-level fine-tuning, and aligns with compliance frameworks like HIPAA, PCI, and SOC2. For GoML, Gemini 3 is a promising entrant in the model ecosystem, one that could deliver value once its agentic behavior is validated in production, not just on curated benchmark suites.

#
Google
Models
November 18, 2025

OpenAI named emerging leader in generative AI

OpenAI has been named an Emerging Leader in the Gartner 2025 Innovation Guide for Generative AI Model Providers. The recognition underscores OpenAI’s enterprise momentum, strong governance investments and support for over 1 million businesses.
Expand

OpenAI was formally recognised by Gartner as an Emerging Leader in its 2025 Innovation Guide for Generative AI Model Providers. The position reflects OpenAI’s broad enterprise traction, citing support for more than 1 million organisations, and its investments in governance, privacy controls, data residency, monitoring, and scalable deployments.

The Emerging Leader category highlights vendors with strong current offerings and promising future potential in a fast-evolving market.

OpenAI emphasises that the next wave of its systems will focus on deeper integration, collaboration and capability. Although the achievement affirms momentum, it comes with the acknowledgement that the generative AI market remains highly dynamic.

#
OpenAI
Models
November 17, 2025

Kimi’s K2 open-source model

Kimi has released K2, a massive open-source Mixture-of-Experts LLM with 32B active parameters out of 1 trillion total. It uses a new MuonClip optimizer and excels in agentic tasks.
Expand

Kimi announced K2, a next-generation open-source Mixture-of-Experts (MoE) model that activates 32 billion parameters out of a 1 trillion-parameter pool.

It’s trained using a novel optimizer called MuonClip, which uses a QK-clip technique to ensure stability while maintaining token efficiency. During post-training, K2 leverages a large-scale agentic data synthesis pipeline and reinforcement learning to improve via environment interactions.
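
For intuition, here is a simplified sketch of the QK-clip idea: when the largest attention logit exceeds a threshold, the query and key projection weights are rescaled to pull it back. The threshold value and the uniform (non-per-head) scaling below are simplifying assumptions, not Kimi's exact recipe.

# Simplified QK-clip sketch: rescale W_q and W_k when attention logits blow up.
# The threshold tau and uniform scaling are simplifying assumptions.
import numpy as np

def qk_clip(W_q, W_k, X, tau=100.0):
    q, k = X @ W_q, X @ W_k
    logits = q @ k.T / np.sqrt(W_q.shape[1])
    s_max = np.abs(logits).max()
    if s_max > tau:
        gamma = np.sqrt(tau / s_max)   # split the correction between W_q and W_k
        W_q *= gamma
        W_k *= gamma
    return W_q, W_k

X = np.random.randn(32, 64)
W_q = np.random.randn(64, 64) * 0.5
W_k = np.random.randn(64, 64) * 0.5
W_q, W_k = qk_clip(W_q, W_k, X)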

In benchmarks, it outperforms many open and closed source models in coding, mathematics, reasoning, and agentic performance. The model checkpoint is being open-sourced to further research.

#
Kimi
Models
November 15, 2025

Disrupting the first reported AI-orchestrated cyber espionage campaign

Anthropic detected a state-backed espionage campaign in which hackers used its Claude Code AI to autonomously carry out cyberattacks on ~30 global targets, with 80–90% of the work done by AI.
Expand

Anthropic announced that it disrupted what it calls the first documented large-scale cyber-espionage campaign primarily executed by an AI system.

A Chinese state-sponsored threat actor manipulated Claude Code into acting as an autonomous cyber-operations agent by breaking malicious intent into harmless-looking subtasks.

The AI conducted reconnaissance, vulnerability scanning, exploit generation, credential harvesting, and data exfiltration with minimal human intervention, completing around 80–90% of the attack workflow. The operation targeted nearly 30 organisations worldwide. Anthropic warns that this event signals a new era in cyberwarfare, where AI agents significantly lower the skill and resource barrier for sophisticated attacks.

#
Anthropic
Models
November 14, 2025

OpenAI for Ireland

OpenAI has launched “OpenAI for Ireland” in partnership with the Irish Government, Dogpatch Labs and Patch to support Irish SMEs, founders, and young builders through training, mentorship, and access to AI.
Expand

OpenAI announced “OpenAI for Ireland,” an initiative created with the Irish Government, Dogpatch Labs, and the nonprofit Patch to accelerate AI adoption across Ireland.

The program focuses on enabling small businesses, startups, and young innovators with practical AI skills. An “SME Booster” program will launch in 2026, offering hands-on AI training, real-time mentoring, and free online learning through the OpenAI Academy.

For young founders aged 16–21, OpenAI and Patch will provide fellowships, grants, and workshops to help build new AI ventures. OpenAI is also expanding its presence in Ireland, where it already operates its European headquarters.

#
OpenAI
Models
November 13, 2025

OpenAI releases GPT-5.1, says new models are warmer and enjoyable to talk to

OpenAI has released GPT-5.1, featuring two variants, Instant and Thinking, with a warmer conversational tone, better instruction-following, and eight new personality presets for more natural, customizable interactions.
Expand

OpenAI has launched GPT-5.1, an upgrade to its flagship model, available in two versions: GPT-5.1 Instant and GPT-5.1 Thinking.

The Instant model focuses on delivering warmer, more human-like conversations and stronger instruction-following, while the Thinking model is designed for efficiency: fast on simple tasks and persistent on complex reasoning problems.

GPT-5.1 also introduces eight personality presets, including Professional, Friendly, Candid, Quirky, Efficient, Nerdy, and Cynical, giving users more control over tone and interaction style. The rollout begins immediately, with earlier GPT-5 models continuing in legacy mode for paid users.

#
OpenAI
Models
November 13, 2025

Anthropic’s AI to run near-autonomous cyberattacks

State-backed Chinese hackers used Anthropic’s Claude AI to run near-autonomous cyber-espionage attacks across about 30 organisations, with the AI performing 80–90% of the intrusion tasks on its own.
Expand

Anthropic revealed that a Chinese state-sponsored hacking group manipulated its Claude AI system to conduct a large-scale cyber-espionage operation. The attackers targeted roughly 30 organisations across sectors including technology, finance, chemicals and government.

Claude was used to automate most stages of the intrusion: reconnaissance, network mapping, exploit development, credential theft, and data extraction, with humans stepping in only for key decisions. Although the model sometimes fabricated details, it still enabled highly efficient, near-autonomous cyberattacks.

The incident highlights a major shift in threat landscapes, showing how advanced AI can drastically amplify the scale and sophistication of state-backed hacking.

The GoML POV

The recent revelation that Chinese state-backed hackers used Anthropic’s agentic AI to execute near-autonomous cyberattacks marks a turning point in how AI will shape both sides of cybersecurity.

This incident reinforces a core reality we see at GoML: agentic AI is no longer just an accelerator for enterprise productivity; it is now a force multiplier for attackers as well.

The most significant takeaway isn’t just that an AI model was misused. It’s that an AI agent was able to autonomously perform 80–90% of the intrusion workflow: reconnaissance, exploit generation, credential harvesting, lateral movement, and data extraction, with humans stepping in only for strategic decisions.

For enterprises, especially in regulated sectors like healthcare, this changes the threat model entirely.

The question is no longer “What can a hacker do?” but “What can an AI agent do if misused?”

The real Big Shift ahead will be how organisations adopt GenAI while embedding AI-native guardrails, continuous monitoring, and domain-specific governance. This is where differentiation will occur: companies that deploy AI agents with safety-by-design will move faster and safer than those that treat security as an afterthought.

For now, this incident validates GoML’s position that AI agents must be deployed with strong oversight, audit trails, human-in-the-loop checkpoints, and misuse detection frameworks. As enterprises race to adopt GenAI, safe agent orchestration will become as important as model performance itself.

#
Anthropic
Models
November 12, 2025

Microsoft detailed its new “AI superfactory” infrastructure

Microsoft’s new “AI superfactory” links huge datacenters in Wisconsin and Atlanta into one seamless AI-cloud system, built for massive frontier model training and high-scale workloads.
Expand

Microsoft has introduced what it describes as the world’s first “planet-scale AI superfactory,” an interconnected high-speed data-center network that spans major sites in Wisconsin and Atlanta and is optimized exclusively for large-scale AI workloads.

Unlike typical cloud data centers running many applications, this system is engineered as a unified infrastructure built for model training and inference at extreme scale, using hundreds of thousands of NVIDIA GPUs, an AI-WAN backbone, and advanced liquid-cooling and high-density rack architecture.

The move signals Microsoft’s commitment to lead in AI infrastructure, enabling next-gen models with unprecedented compute and low latency across regions.

#
Microsoft
Models
November 11, 2025

Private AI compute next step in building private and helpful AI

Google’s Private AI Compute pairs its powerful Gemini cloud models with a secure, sealed execution environment, ensuring user data stays isolated and invisible even to Google.
Expand

Google has launched Private AI Compute, a new cloud infrastructure that allows its Gemini models to run with the power of the cloud while preserving stringent privacy guarantees.

The platform creates a “trusted execution environment” that isolates user data from Google’s broader systems, encrypting memory and enforcing remote attestation so only the user can access their processed information.

The system runs on Google’s custom TPUs and utilizes hardware-enforced safeguards such as Titanium Intelligence Enclaves (TIE). The company says the goal is to bring advanced AI features like on-device-level privacy to the cloud so that users benefit from larger models without sacrificing control over their data.

#
Google
Models
November 11, 2025

Meta Platforms releases open-source “Omnilingual ASR” for 1,600+ languages

Meta open-sourced ASR models that natively support 1,600+ languages, with zero-shot extension to 5,400+ languages, greatly expanding voice-to-text accessibility for low-resource languages.
Expand

Meta released the Omnilingual ASR model suite, a family of automatic speech recognition models supporting over 1,600 languages out-of-the-box, and designed to generalize to more than 5,400 languages via zero-shot in-context learning. The models are fully open-source under Apache 2.0, enabling commercial reuse.

The architecture includes self-supervised speech encoders and LLM-based decoders, enabling transcription of under-represented languages previously unavailable in major ASR systems.

This release marks a significant step in voice AI inclusivity and indicates Meta’s renewed emphasis on foundational AI infrastructure.

No items found.
Models
November 11, 2025

Anthropic aims to overtake OpenAI on profitability

Anthropic has adopted an enterprise-first growth strategy and aims for profitability years ahead of OpenAI, highlighting a cost-efficient model and contrasting with OpenAI’s heavy losses.
Expand

Anthropic’s leadership believes its smarter path to AI growth is anchored in B2B enterprise adoption, rather than purely consumer scale.

The company aims to turn a profit ahead of OpenAI, which is projecting losses of around $74 billion by 2028. The key differentiator is Anthropic’s focus on high-value enterprise contracts, scalable APIs, and cost-efficient compute infrastructure.

As opposed to OpenAI’s broad consumer push and heavy infrastructure spend, Anthropic’s model may give it an edge where margins matter most.

#
Anthropic
Models
November 11, 2025

Babeltext launches global-AI access platform

Babeltext, founded by David Hayes, has launched a multilingual AI messaging platform supporting 195 languages and accessible via SMS, WhatsApp, and WeChat, targeting underserved mobile-first users.
Expand

David Hayes has unveiled Babeltext, a new AI platform designed to expand access to generative AI globally by enabling conversations via familiar messaging channels (SMS, WhatsApp, WeChat) and supporting 195 languages.

The company sees a shift from “answers to actions”: enabling users not just to query AI but to act through it. Built in partnership with AWS Bedrock, Babeltext targets mobile-first and under-served populations whose access to desktop or high-capacity devices is limited.

The release hints at a major push toward inclusive AI, focusing on human context and device accessibility rather than sheer compute power.

No items found.
Models
November 7, 2025

Google preps ‘Nano Banana 2’ image model (GEMPIX2)

Google is preparing to launch Nano Banana 2 (GEMPIX2) next week, a compact, high-fidelity AI image model built for creators and professionals seeking fast, photorealistic generation within the Gemini ecosystem.
Expand

Google is finalizing Nano Banana 2 (GEMPIX2), an advanced AI-assisted image generation model under the Gemini branding. Set for release next week, GEMPIX2 targets creators and design professionals, promising higher resolution, texture fidelity, and lighting accuracy than its predecessor.

Optimized for speed and local deployment, it will integrate with Gemini Apps, YouTube Create, and Vertex AI’s ImageGen API, supporting low-latency image synthesis for real-time editing and creative workflows.

This marks Google’s renewed push into professional-grade generative visual tools, aiming to challenge OpenAI’s DALL-E 4 and Adobe Firefly 3 in the enterprise creator market.

No items found.
Models
November 6, 2025

Google teases Gemini 3 pro Preview

Google’s Gemini 3 Pro model was spotted in Vertex AI code labeled “11-2025,” hinting at a November release. It’s expected to appear soon in AI Studio for developers’ early access.
Expand

Google has quietly hinted at the upcoming release of Gemini 3 Pro, its next-generation large multimodal model, after references surfaced in Vertex AI code tagged “11-2025.”

The discovery suggests an imminent November 2025 launch, likely beginning with a preview inside AI Studio for developers and enterprise users. Gemini 3 Pro is expected to offer stronger reasoning, improved multimodal context handling (text + vision + audio), and better latency than Gemini 1.5 Pro.

This aligns with Google’s broader Gemini 3 family rollout roadmap, positioning it to rival OpenAI’s GPT-5 and Anthropic’s Claude Sonnet 4.5 across high-end cloud AI integrations.

No items found.
Models
November 6, 2025

OpenAI urges shared safety standards

OpenAI warns that AI capabilities are advancing rapidly, costs falling about 40× per year, and urges shared safety standards, public accountability, AI resilience, ongoing impact measurement, and user empowerment.
Expand

OpenAI states that AI systems now outperform humans on some difficult intellectual tasks and that the gap between public perception and capability is immense.

It estimates the cost per unit of intelligence has been dropping roughly 40-fold annually, forecasting that by 2026 AI may make small discoveries and by 2028 significant ones.

To ensure broad benefit and mitigate risks, OpenAI recommends the field adopt shared standards among frontier labs, public oversight, an AI resilience ecosystem similar to cybersecurity, real-world impact measurement, and equitable individual access to advanced AI.

#
OpenAI
Models
November 3, 2025

OpenAI $38B multi-year AWS cloud/compute deal

OpenAI signed a seven-year, $38 billion agreement with AWS to run massive GPU-heavy AI workloads, giving OpenAI access to hundreds of thousands of NVIDIA GPUs and expanded global infrastructure.
Expand

OpenAI and Amazon Web Services announced a strategic, multi-year agreement valued at roughly $38 billion that supplies OpenAI with massive EC2 UltraServer capacity, including hundreds of thousands of NVIDIA GPUs, to train and run advanced models.

The deal accelerates OpenAI’s ability to scale agentic and multimodal workloads, diversifies its cloud footprint beyond previous heavy Azure usage, and signals a major vote of confidence in AWS’s capacity and performance.

Markets reacted positively for AWS/Amazon; analysts note the move pushes compute costs and contractual obligations far into the future, raising long-term financing questions for OpenAI.

#
OpenAI
Models
October 29, 2025

OpenAI offers 1-year free ChatGPT Go access in India

ChatGPT Go is a new lower-cost subscription plan offering extended access to GPT-5, image generation, file uploads, advanced data tools and longer memory, available only in selected countries.
Expand

OpenAI’s ChatGPT Go is an affordable monthly subscription plan that builds on the Free tier by providing extended access to GPT-5, greater image-generation quota, enhanced file-upload and data-analysis capabilities, and a longer conversational memory for more personalised interaction.

It includes organise-and-track tools like Projects, Tasks and Custom GPTs. ChatGPT Go does not include API usage, legacy models like 4o, or connectors/Sora features (which are available in higher tiers).

Availability is currently limited to selected countries, and usage limits may vary based on system load.

#
OpenAI
Models
October 29, 2025

Open-weight “gpt-oss” models release

OpenAI released gpt-oss-safeguard, open-weight reasoning models (20B and 120B) enabling developers to apply custom policies at inference, classify messages, completions and chats while explaining decision logic.
Expand

OpenAI introduced the gpt-oss-safeguard model series (gpt-oss-safeguard-20B and -120B) as open-weight reasoning engines tailored for safety and trust-and-safety classification tasks.

Developers supply their own policy text at runtime and the model reasons over input accordingly, classifies conversation elements (user messages, completions, full chats) and emits chain-of-thought explanations of how decisions are made.

OpenAI positions them as alternatives to rigid classifiers: they permit iterative policy changes without retraining. Limitations noted include higher compute/latency and that traditional classifiers may still win in ultra-high precision contexts.
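
A minimal sketch of how a developer might wire up policy-at-inference classification, assuming the open-weight model is served behind an OpenAI-compatible endpoint (for example via vLLM); the base_url, model name, and policy text below are illustrative assumptions.

# Illustrative policy-at-inference classification against a locally served
# gpt-oss-safeguard model behind an OpenAI-compatible endpoint (e.g., vLLM).
# The base_url, model name, and policy text are assumptions for this sketch.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

POLICY = """Classify the user message as ALLOWED or VIOLATION.
A VIOLATION is any request to obtain another person's credentials."""

def classify(message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-oss-safeguard-20b",
        messages=[
            {"role": "system", "content": POLICY},   # policy supplied at runtime
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content           # label plus reasoning

print(classify("How do I reset my own forgotten password?"))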

#
OpenAI
Models
October 28, 2025

The next chapter of the Microsoft-OpenAI partnership

OpenAI and Microsoft signed a definitive agreement: Microsoft now holds a stake in OpenAI Group PBC valued at approximately US$135 billion (~27%). The deal extends IP rights to 2032 and adds independent AGI verification.
Expand

OpenAI and Microsoft have entered a new phase of their strategic partnership, marked by a definitive agreement that values Microsoft’s investment at around US $135 billion (~27% stake in OpenAI Group PBC).

The deal preserves Microsoft’s exclusive Azure API rights for frontier models until AGI is declared, while adding fresh terms: any AGI declaration by OpenAI must be verified by an independent expert panel.

Microsoft’s IP rights now extend through 2032 and include post-AGI models. OpenAI can now freely partner beyond Microsoft for compute and deployment, signalling a more open ecosystem.

#
OpenAI
Models
October 28, 2025

PayPal partners with OpenAI to enable ChatGPT payments

PayPal and OpenAI have formed a partnership allowing users to make direct payments through ChatGPT, integrating PayPal’s wallet and merchant network into OpenAI’s conversational commerce ecosystem.
Expand

PayPal announced a strategic collaboration with OpenAI to integrate its digital wallet directly into ChatGPT. This integration enables users to make purchases and payments seamlessly through conversational interactions within the platform.

The partnership also connects PayPal’s merchant ecosystem with OpenAI’s Instant Checkout and agentic commerce features, allowing businesses to sell directly through ChatGPT.

Following the announcement, PayPal raised its 2025 earnings forecast and introduced its first quarterly dividend, signaling confidence in its AI-driven growth strategy.

#
OpenAI
Models
October 28, 2025

Advancing Claude for financial services

Anthropic expands its “Claude for Financial Services” offering with a beta Excel add-in, real-time market data connectors, and new pre-built financial modelling agent-skills for enterprise users.
Expand

Anthropic has upgraded its Claude AI platform for the financial-services sector by introducing a research-preview Excel sidebar add-in that reads, edits, and builds spreadsheets with full audit transparency.

It’s also added numerous live-data connectors (e.g., market pricing, earnings call transcripts, document-room search) and six new pre-built “agent skills” covering tasks like discounted-cash-flow models, comparable-company analysis, due-diligence data-packs and initiating coverage reports.

These features are initially available for Max, Enterprise and Teams users and aim to accelerate modelling, research and workflow automation across finance domains.

#
Anthropic
Models
October 27, 2025

Addendum to GPT-5 System Card: Sensitive conversations

OpenAI updated GPT-5 to better handle sensitive conversations by routing these to a specialized version, collaborating with 170+ mental-health experts, and reducing unsafe responses by 65-80%.
Expand

In this addendum, OpenAI explains that GPT-5 has been enhanced to respond more safely and thoughtfully during emotionally fraught or distressing conversations. The update (launched October 3) benefitted from collaboration with more than 170 clinicians and mental-health experts, allowing GPT-5 to more reliably detect signs of distress (e.g., psychosis, mania, self-harm risk), de-escalate conversations and direct users toward real-world professional help.

The company reports that the proportion of responses falling short of its safety expectations dropped by 65–80% compared to the prior version.

Additional measures include routing sensitive chats to a reasoning-capable model, expanding access to crisis hotlines, and adding reminders for long sessions.

#
OpenAI