OpenAI is releasing its first artificial intelligence model powered by chips from Nvidia rival Cerebras Systems, expected to be a lighter version of its Codex automated coding model.

The update is part of a broader push by OpenAI to expand the pool of suppliers it uses to train and deploy AI models. It signed a $10 billion deal with Cerebras in January, one of several chip agreements struck over the past 12 months.

GPT-5.3 Codex Spark is designed for real-time, interactive coding rather than longer-running Codex tasks such as generating large volumes of code from a single prompt. Developers can use it for targeted edits, tweaks and refinements. It will reportedly be available to ChatGPT users on Thursday.

The release reflects a wider shift among foundation model providers toward more targeted, smaller models built for specific use cases. Both OpenAI and Anthropic have recently launched updates tailored to sectors such as finance and healthcare, alongside lighter versions of their flagship systems.

While ChatGPT remains the benchmark AI product in terms of usage, Anthropic and Google have been steadily gaining ground in the enterprise market. Anthropic has received strong reviews for its Claude Code tool, and the praise for Google's Gemini 3 reportedly prompted OpenAI to declare an internal "code red" and refocus all development on competing.

The battle to chip away at Nvidia

Much of the AI industry remains reliant on Nvidia GPUs to train and deploy models, given their performance advantage. That dominance has helped make Nvidia the most valuable company in the world and the first to surpass a $5 trillion market capitalisation.

OpenAI has been trying to reduce that dependence through a series of partnerships with rivals. In addition to its $10 billion Cerebras deal, it has signed a six-gigawatt agreement with AMD to deploy Instinct AI GPUs beginning in the second half of 2026, and a multi-year partnership with Broadcom to co-develop custom AI accelerators and systems.

Despite these efforts, OpenAI remains closely tied to Nvidia. In September last year, it announced plans to deploy at least 10 gigawatts of Nvidia chips across its data centres. Nvidia, in turn, announced a $100 billion investment linked to that deployment, although CEO Jensen Huang later clarified that the figure was “never a commitment.”

OpenAI’s $1.4 trillion infrastructure spend

Questions have also been raised about OpenAI's projected infrastructure spending over the next five years, with the company associated with a $1.4 trillion build-out. That figure dwarfs the roughly $13 billion in revenue the company generated last year, though CEO Sam Altman said OpenAI reached a $20 billion annualised revenue run rate by the end of 2025.

A significant portion of the planned investment is tied to the Stargate project, a joint venture between OpenAI, SoftBank, Oracle, and investment firm MGX that aims to invest $500 billion in data centre development by 2029. Other companies, including Foxconn, have joined the initiative and are expected to supply components for the facilities.

Nvidia is likely to provide a substantial share of the chips for these data centres. However, OpenAI is clearly seeking to diversify its supply chain to avoid overreliance on Nvidia’s already constrained capacity.

Also read: TechRepublic breaks down how OpenAI’s Stargate initiative aims to power AI growth while keeping community energy bills in check.
