OpenAI releases two open reasoning models
PLUS Google DeepMind introduces Genie 3, claims progress toward AGI
Quick Note
We have some big news on deck today. Both of today’s topics are new to us, so as always — if you have any questions or need clarification on anything in today’s newsletter, don’t hesitate to reply to this email.
OpenAI launches two open reasoning models

Generated by ChatGPT
OpenAI has released two “open-weight” reasoning models, which are designed to rival the performance of OpenAI’s most advanced models at a lower cost.
Key background: “Open-weight” refers to models whose weights (explained below) have been publicly released. This means that the trained models can be downloaded and run locally (ex. on a laptop rather than through ChatGPT).
What you should know:
OpenAI’s new models come in two sizes — the larger gpt-oss-120b can run on a single high-end Nvidia GPU, while the lighter-weight gpt-oss-20b can run on consumer devices like laptops.
Both models come close to matching the performance of o3 and o4-mini (OpenAI’s most advanced models) across key benchmarks.
OpenAI is launching them under the Apache 2.0 license, which allows developers to create and monetize apps using gpt-oss without needing permission.
Why it matters: OpenAI has generally favored a closed-source approach, selling access to its models rather than releasing them freely. Launching open-weight models will let developers and startups build new products and tools on OpenAI’s tech, an approach that has proven very successful for Chinese AI labs including DeepSeek and Alibaba.
What does weight mean in AI?
A “weight” is basically one of the many tiny numbers inside an AI model that helps it make decisions. Think of them as settings that control how much attention the model pays to different pieces of information as it processes prompts and data. A single AI model can have billions or even trillions of weights that work together to allow tools like ChatGPT to answer questions and complete tasks. When a company releases these weights to the public, it’s essentially sharing the final settings of the trained model, allowing other people to build on it without starting from scratch.
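The “settings” metaphor can be made concrete with a toy model that has just three weights (a made-up illustration for this newsletter, not how gpt-oss actually works):

```python
# A toy "model": each weight controls how much one piece of input
# matters to the output. Real models like gpt-oss have billions of
# weights, but the principle is the same.
weights = [0.8, -0.5, 0.3]  # the learned "settings" of the model
bias = 0.1

def predict(features):
    # Multiply each input by its weight and add everything up to get a score.
    return bias + sum(w * x for w, x in zip(weights, features))

score = predict([1.0, 2.0, 3.0])
print(score)  # roughly 0.8
```

Releasing an “open-weight” model amounts to publishing those numbers (along with the code that runs them): anyone who has the weights can reproduce the model’s behavior on their own hardware without retraining it from scratch.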
Google DeepMind claims new Genie 3 world model is a step toward AGI

Source: Google DeepMind
Google DeepMind has introduced Genie 3, a general-purpose world model that generates interactive 3D environments from text prompts and images.
Key background: World models are seen as a stepping stone toward AGI (AI that matches human-level intelligence) because they’re designed to have a deeper understanding of how the world works. Unlike current AI models that offer task-oriented capabilities (ex. writing an essay), world models aim to simulate physical aspects of the real world (ex. physics, weather, etc.).
What’s new:
Where previous Genie models produced only brief, limited simulations, Genie 3 creates interactive spaces that users and AI agents can explore in real time.
Genie 3 can also generate several minutes of coherent simulations, while its predecessor Genie 2 capped out at about 20 seconds.
Users can also modify each simulation using text prompts. This allows the model to add new objects, characters or weather to an environment in real time.
Why it matters: DeepMind’s vision is to use simulated 3D environments as training data for real-world AI agents. As we’ve talked about in recent weeks, current AI models rely heavily on pattern recognition. To reach human-level intelligence, AI will have to learn how to navigate situations it hasn’t seen before in the same way that humans do. World models present a virtual training ground where AI can learn to do that.
More trending news
Anthropic launches Claude Opus 4.1, an incremental upgrade to its most advanced model
Google brings AI coding agent “Jules” out of beta
President Trump plans to impose a 100% tariff on computer chips
Thanks for reading! My goal is to make these emails as easy to read as possible — if there’s anything I can do for you to achieve that goal, let me know!
See you next week,