By now, you probably know that China’s latest AI model, DeepSeek-R1, has been at the centre of every conversation for achieving SOTA performance with minimal compute and cost. It shattered the idea that building a model as capable as OpenAI’s on a $10 million budget is impossible. Remember Sam Altman’s last visit to India?
With DeepSeek setting a precedent for everyone, India has gotten the boost it always needed.
India’s DeepSeek Ambitions
When US President Donald Trump announced Project Stargate a week ago, it sparked conversations about building something similar in India. An Indian Project Stargate was portrayed as a necessity, with many tech leaders weighing in.
The discussions had barely died down when DeepSeek raised the next question: why can’t India build a model just like it?
Well, India is already building it.
“Yes, we definitely are! It won’t be a 671B parameter one (to begin with), but it’ll be a frontier model in its parameter category,” said Abhishek Upperwal, founder and CEO of Soket AI Labs, in an exclusive interaction with AIM.
“The pace of development will depend on the kind of funds we get access to, but we are gonna definitely build it,” the founder of the Gurgaon-based AI research startup added.
Upperwal stated that Pragna-1B (Soket’s AI model) marks the team’s initial step toward developing frontier models. The 1.25 billion-parameter model was trained on a budget of just $100K, covering both synthetic data and compute costs.
“The plan is to bootstrap bigger models using smaller ones and any open-source model with a permissive license—while keeping compute costs dirt cheap,” he said.
He highlighted that high-quality data and training optimisations make this approach feasible, pointing to DeepSeek as a successful example.
Upperwal noted that if “less resources” translates to $2-3 million, the prospects for building frontier models are bleak, or progress will be significantly slower. In such a scenario, companies would have to prioritise revenue-generating products over AI model development.
“I think we need at least $10 million to start working on frontier tech, and this money should be purely dedicated to R&D for building these models—no distractions like building applications or even thinking about GTM. This is where investors and founders need to align with patient capital,” said Upperwal.
Similarly, Reliance-backed Indian AI startup TWO AI is building a cost-efficient multilingual AI model family with speech, search, and visual processing in 50+ languages. The company says it has already been building DeepSeek-like models.
“DeepSeek’s RL-only post-training approach and insights like distilling reasoning into smaller models really resonate with what we’re doing at TWO AI,” said Pranav Mistry, founder and CEO of TWO AI, in an interaction with AIM.
Mistry believes the AI race now demands rapid innovation rather than massive compute power. “Gone are the days when you needed a 20,000 GPU farm to train a single model,” he said.
He added that TWO AI has demonstrated this with its SUTRA model, which outperforms SOTA models on the MMLU benchmark for Indian languages despite being trained on a $2 million budget.
While optimised approaches are proving crucial, Mistry acknowledged that resources still matter. “Of course, more resources can help accelerate the speed at which we can innovate,” he added.
Pratyush Kumar, co-founder of Sarvam AI, another Indian AI startup that is developing LLMs and GenAI solutions for Indian languages, recently posted on X inviting Perplexity co-founder Aravind Srinivas to join their mission.
“Aravind, at SarvamAI we are building sovereign models that combine deep reasoning and Indic language skills. Would love to have you join this mission!” he wrote. However, when AIM reached out, Sarvam AI declined to comment on DeepSeek.
Multimodal AI platform Krutrim AI, started by Ola’s Bhavish Aggarwal, is also on a mission to cater to the Indian audience through its multilingual platforms.
What is Stopping India?
“Very soon, we will also have our own LLMs,” said IT minister Ashwini Vaishnaw, at the recent Utkarsh Odisha Conclave. “In the India AI compute facility, we have received compute bids for creating 18,000 GPUs,” he said.
While the government is slowly encouraging and providing incentives to promote AI in India, VCs are still sceptical about investing fully in it.
“The problem is that the benefit here isn’t immediate revenue generation, which is why VCs run away from these kinds of ventures. But the real ROI is in gaining the know-how of building intelligence at scale, which can create value in a hundred other ways (just imagine the kind of leverage DeepSeek holds today),” said Upperwal.
“Intelligence and the know-how to build one will be the most valuable IP in the future,” he added.
Upperwal believes that reaching DeepSeek-R1’s level will require at least $50 million. “DeepSeek is already on its 3rd version, plus multiple other models. The cost to get here should be the aggregate of everything they’ve spent so far. I’d estimate $50-100 million,” he said.
He believes the key lies in securing adequate R&D funding (ranging from $5-10 million per startup) for at least 4-7 teams. “Sarvam is the only startup with access to such funds, but it’s splitting its focus between figuring out use cases and building models, which slows down progress,” he said.
In a blog post, Zerodha co-founder Kailash Nadh shared his views on DeepSeek, emphasising research and human capability as the priority.
Nadh believes that India’s AI sovereignty and future depends not on a narrow focus on LLMs or GPUs but on building a foundational ecosystem that encourages breakthroughs through a blend of scientific, social, and engineering expertise across academia, industry, and civil society.
“In fact, the bulk of any long-term AI sovereignty strategy must be a holistic education and research strategy. Without the overall quality and standard of higher education and research being upped significantly, it is going to be a perpetual game of second-guessing and catch-up,” he said.