OpenAI launches two ‘open’ AI reasoning models

OpenAI announced Tuesday the launch of two open-weight AI reasoning models with similar capabilities to its o-series. Both are freely available to download from the online developer platform Hugging Face, the company said, describing the models as “state of the art” when measured across several benchmarks for comparing open models.

The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory.


Big News in AI!


OpenAI and open source are now officially in the same sentence. It's happening: OpenAI is releasing open-weight reasoning models with state-of-the-art real-world performance, comparable to o4-mini, and, here's the kicker, they can run locally on your machine!


Open-Source AI from OpenAI


There are currently two models listed on Hugging Face:


  • gpt-oss-120b (117B parameters): massive, powerful
  • gpt-oss-20b (21B parameters): more lightweight, still impressive

Both are downloadable and runnable locally by anyone.


This is game-changing. For years, OpenAI has led with proprietary models. Now they're joining the open-source movement, a space Meta previously dominated with LLaMA.


The Age of Personal AI Assistants


Mark Zuckerberg was right in one of his recent talks:

“The future of AI is personal assistants — for everyone.”


And to achieve that, you can't rely solely on centralized cloud services. You need models that run locally — on your mobile, on your desktop — tailored to your needs. That's exactly what open-source models enable.


Both models are available under the flexible Apache 2.0 license. They outperform similarly sized open models on reasoning tasks, demonstrate strong tool-use capabilities, and are optimized for efficient deployment on consumer hardware. They were trained using a mix of reinforcement learning and techniques informed by OpenAI’s most advanced internal models, including o3 and other frontier systems.

The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure. Both models also perform strongly on tool use, few-shot function calling, and chain-of-thought (CoT) reasoning (as seen in results on the Tau-Bench agentic evaluation suite), as well as on HealthBench (even outperforming proprietary models like OpenAI o1 and GPT‑4o).
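Curious why a 117B-parameter model fits on one 80 GB GPU? Here's a back-of-envelope sketch. The ~4.25 bits-per-parameter figure assumes aggressively quantized (roughly 4-bit) weights plus overhead; that figure is my assumption for illustration, not something stated in the release.

```python
# Rough sanity check: do the stated parameter counts fit the stated memory?
# ASSUMPTION: ~4.25 bits per parameter (4-bit quantized weights + overhead).

def weight_footprint_gb(params_billions: float, bits_per_param: float = 4.25) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(f"gpt-oss-120b: ~{weight_footprint_gb(117):.0f} GB")  # comfortably under an 80 GB GPU
print(f"gpt-oss-20b:  ~{weight_footprint_gb(21):.0f} GB")   # fits a 16 GB laptop
```

Under that assumption, the weights alone come out around 62 GB and 11 GB respectively, which is consistent with the 80 GB GPU and 16 GB laptop figures above once you leave headroom for activations and KV cache.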


Let’s build. 💥
