Will Microsoft Build Frontier Models to Reduce Reliance on OpenAI?

Salesforce CEO Marc Benioff believes it will do so and might not use OpenAI anymore. 
Illustration by Nikhil Kumar

A few days ago, Microsoft announced an “evolution” of its partnership with OpenAI, suggesting a notable shift in their collaboration. OpenAI had been using Microsoft’s cloud services exclusively, but it now has the option to choose other cloud platforms. In turn, Microsoft holds a right of first refusal (ROFR) whenever OpenAI requests more computing capacity.

“This new agreement…includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR),” the announcement stated. 

Notably, there has been speculation for quite some time that the relationship between these two companies has deteriorated. 

What was surprising is that Salesforce CEO Marc Benioff, in a rare instance, made a prediction favourable to Microsoft. “It’s extremely important that OpenAI gets to other platforms quickly because Microsoft is building their own AI,” he said. 

Benioff also speculated that Microsoft would have its own frontier models and said, “I don’t think they will use OpenAI in the future.”

While Microsoft has not officially announced any plans to build a large frontier model, certain signs could suggest such a possibility.

Microsoft Will Have to Take a Phi-4 Approach

The latest in Microsoft’s Phi series, Phi-4, packs 14B parameters. On benchmarks, it outperformed Llama 3.3 70B and OpenAI’s GPT-4o across several evaluations. 

This not only indicated impressive performance from a small model, but also showed Microsoft tackling a crucial problem many AI researchers are discussing: data scarcity. The model relies mostly on high-quality synthetic data. Unlike OpenAI, the company also did not use any inference-time optimisation techniques. 

This is significant progress in breaking the barrier of scaling laws. “Blindly scaling, like how people have been doing with trillion parameter models, isn’t just needed, right?” Harkirat Behl, member of technical staff at Microsoft AI, earlier told AIM.

Benchmarks are another way to gauge a model’s performance. For a long time, leakage of benchmark data into training corpora and overfitting to test sets have led to inflated results. 

With Phi-4, however, this concern may have been largely eliminated. Microsoft tested the model on benchmarks released only after its training data was collected, namely the November 2024 AMC 10/12 tests, where it outperformed several competitors. 

Given that Phi-4 is open source, several developers were able to test its capabilities. The model hit 50,000 downloads on Ollama, an open-source AI model platform, in just three days and 40,000 on Hugging Face. 

Several developers were in awe of the model’s performance, especially in limited hardware conditions. 

Moreover, given the company’s legacy and capital, there is little doubt it could invest in the GPU clusters needed to train frontier models. 

Amidst the speculation, Microsoft CEO Satya Nadella said, “We’re not going to do things twice.” His statement indicates that the company will make the most of OpenAI’s existing models. 

“The more important thing for me is to build value on top of OpenAI. So we have a fantastic post-training stack,” he added. 

‘Mustafa Suleyman and Sam Altman Aren’t Best Friends’ 

According to Benioff, Microsoft hired its AI CEO Mustafa Suleyman from Google DeepMind to build frontier models. 

He added that “Suleyman and [OpenAI CEO] Sam Altman are not best friends”. 

Nadella, however, dismissed any signs of a deteriorating relationship. “Our partnership continues,” he said, adding that he believes that the new details are only going to be beneficial for both, especially in the context of The Stargate Project. 

For this project, big tech companies, including Oracle, SoftBank, and OpenAI, are pooling $500 billion to build data centres where OpenAI, among other AI companies, will have access to unprecedented levels of compute. 

“Sam wants to continue with the scaling laws to build out more compute in order for him to train more models and we have a ROFR. So he comes to us first,” Nadella said. 

“If we meet those needs, then we clear it. If not, he can go to these other providers. And so, I think it works out well for Sam and for us,” he added. 
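The flow Nadella describes can be sketched as a simple decision rule: Microsoft sees every capacity request first, and only if it declines can OpenAI turn to other clouds. The sketch below is purely illustrative; the function name, capacity numbers, and provider list are hypothetical, not terms from the actual agreement.

```python
def allocate_compute(requested_gpus: int, microsoft_capacity: int,
                     other_providers: list[str]) -> str:
    """Illustrative right-of-first-refusal flow: Microsoft sees the
    request first and may fulfil it; only if it declines can OpenAI
    turn to another cloud provider."""
    if requested_gpus <= microsoft_capacity:
        return "Microsoft Azure"  # ROFR exercised: request met in-house
    # Microsoft passed on the request; OpenAI may go elsewhere
    return other_providers[0] if other_providers else "unfulfilled"

# A request that fits Microsoft's capacity stays on Azure
print(allocate_compute(10_000, 50_000, ["Oracle", "CoreWeave"]))
# → Microsoft Azure
```

The point of the structure is the ordering: alternative providers are consulted only after Microsoft declines, which is what distinguishes a ROFR from an open marketplace.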

Notably, Microsoft’s latest announcement about the partnership carried the ROFR detail at the very end. 

A majority of it focused on the exclusivity of OpenAI IP to Azure and the companies’ revenue-sharing agreements. The announcement also stated that “OpenAI recently made a new, large Azure commitment that will continue to support all OpenAI products as well as training”.

However, the relationship carries multiple layers. Recently, Microsoft and OpenAI have reportedly agreed on a new, specific definition of artificial general intelligence (AGI). 

Under this reported definition, OpenAI achieves AGI only once it has built a system that can generate $100 billion in profits. This matters because a clause in the agreement reportedly stipulates that Microsoft will no longer have access to OpenAI’s models once the latter achieves AGI.

If the new definition of AGI is anything to go by, OpenAI is far from it. This year, OpenAI reported a $5 billion loss on $3.7 billion in revenue. Reports also suggest that OpenAI will not turn profitable until 2029.
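Those reported figures imply spending well above revenue. A quick back-of-envelope, taking the numbers at face value (loss = expenses − revenue):

```python
# Reported 2024 figures (approximate, in billions of USD)
revenue_b = 3.7
loss_b = 5.0

# Rearranging loss = expenses - revenue gives the implied spending
expenses_b = revenue_b + loss_b
print(f"Implied annual expenses: ~${expenses_b:.1f}B")
# → Implied annual expenses: ~$8.7B
```

That roughly $8.7 billion in implied expenses, against the $100 billion profit threshold in the reported AGI definition, is why observers place the milestone far in the future.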

Moreover, OpenAI’s losses raised scepticism about how it could contribute to The Stargate Project. In a post on X, xAI founder Elon Musk claimed that OpenAI doesn’t actually have any money.

Supreeth Koundinya

Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.