Microsoft – Analytics India Magazine https://analyticsindiamag.com AIM - News and Insights on AI, GCC, IT, and Tech Mon, 17 Mar 2025 19:15:31 +0000 en-US hourly 1 https://analyticsindiamag.com/wp-content/uploads/2025/02/cropped-AIM-Favicon-32x32.png Microsoft – Analytics India Magazine https://analyticsindiamag.com 32 32 But Why Did Microsoft Port TypeScript to Go Instead of Rust? https://analyticsindiamag.com/ai-features/but-why-did-microsoft-port-typescript-to-go-instead-of-rust/ Wed, 12 Mar 2025 07:55:48 +0000 https://analyticsindiamag.com/?p=10165914 “If you're coming from JavaScript, you're going to find a transition to Go a lot simpler than the transition to Rust.”]]>

Microsoft is all set to port the TypeScript compiler and toolset to Go, achieving 10x faster compile speed across different codebases. Though developers largely praised the announcement, some expressed disappointment because Microsoft chose Go instead of Rust to port the TypeScript compiler. 

A user on X summed up the overall sentiment perfectly. “More shocking than TypeScript getting 10x speedup is they didn’t write it in Rust,” he said. 

“In a blink of an eye, Java vs C# debates have turned into Rust vs Go debates. Special thanks to TypeScript for making this happen,” another said

Microsoft is rewritting TypeScript compiler in… Go.
byu/rodrigocfd inrustjerk

As the displeasure poured in, Ryan Cavanaugh, a lead developer of TypeScript, clarified the stance, admitting that they had anticipated a debate over this. He said that while Rust was considered an option, the ‘key constraint’ was portability, which ensured that the new codebase was algorithmically similar to the current one. 

He also revealed that multiple ways were explored to represent the code so that rewriting it in Rust would be manageable. But they ran into ‘unacceptable’ trade-offs with performance, and ergonomics. Some approaches required implementing their own garbage collector (GC) and adding additional complexity. 

This was in contrast with Go, where it automatically recycled memory, or what is called ‘garbage collection’. “Some of them came close, but often required dropping into lots of unsafe code, and there just didn’t seem to be many combinations of primitives in Rust that allow for an ergonomic port of JavaScript code,” said Cavanaugh. 

He explained that the team ended up with two options. One was to do a rewrite from scratch using Rust, which he said could take ‘years’ and still yield an incompatible version of TypeScript ‘no one could use’. Second, build a usable port in Go within a year, which is ‘extremely’ compatible in terms of semantics, while offering competitive performance. 

Cavanugh also indicated that Go, like Rust, has excellent code generation, data representation capabilities, and ‘excellent’ concurrency primitives. 

The same applies to Go, he explained, but given the way they had written the code so far, it turned out to be a surprisingly good fit for the task.

“We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code,” he added in a post on GitHub

‘Transition to Go is a Lot Simpler than Transition to Rust’

In an interview, Anders Hejlsberg, the lead architect of TypeScript, largely reiterated Cavanaugh’s remarks. 

He said the only way the project would be meaningful is porting the existing codebase as is. The original codebase was designed with certain assumptions – and the most important one was the presence of automatic garbage collection. 

“I think [that] pretty much limited our choices, and started to heavily rule out Rust,” said Hejlsberg, indicating the lack of automatic memory management. 

Another challenge with Rust, as pointed out by Hejlsberg, is its strict limitations around cyclic data structures which the TypeScript compiler heavily relies on. The system includes abstract syntax trees (ASTs) with parent and child references, symbols and declarations that reference each other, and recursive types that naturally form cycles. 

It is important to note that TypeScript is built on top of JavaScript. “If you’re coming from JavaScript, you’re going to find a transition to Go a lot simpler than the transition to Rust,” said Hejlsberg. 

He also said that the transition is super gentle on the system, and isn’t a “super complicated” language with an awful lot of ceremony. “Which I would say Rust comes a lot closer to,” he added. 

]]>
Microsoft Ports TypeScript to Go with 10x Speed Boost https://analyticsindiamag.com/ai-news-updates/microsoft-ports-typescript-to-go-with-10x-speed-boost/ Tue, 11 Mar 2025 18:11:26 +0000 https://analyticsindiamag.com/?p=10165862 The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage.]]>

Microsoft on Tuesday announced a project to enhance TypeScript performance by porting its compiler and language tools to Go. 

The project, titled ‘Corsa’ promises a 10x speed boost for developers, along with a ‘substantial’ reduction in memory usage. 

Anders Hejlsberg, the lead architect of TypeScript, and a technical fellow at Microsoft, took to a blog post, and a YouTube video to announce the same. 

The TypeScript compiler and toolset have been ported to Go, through a direct, file-by-file and function-by-function translation from the original codebase. 

The decision to port the TypeScript compiler and toolset to Go was to overcome JavaScript’s (JS) performance limits. Despite TypeScript’s success over the past decade, its self-hosted JavaScript implementation struggled with issues like slow compile times, and out-of-memory errors, as indicated by Hejlsberg. 

“We’ve likely reached the limit of what we can squeeze out of JavaScript,” he said. 

Microsoft expects to be able to preview a native implementation of the TypeScript compiler, capable of command-line typechecking by mid 2025, with a feature-complete solution for project builds and a language service by the end of the year.

This project showed over 10x faster compile time across several codebases. For example, compiling Visual Studio Code’s 1.5 million lines of code drops from 77.8 seconds to 7.5 seconds. 

“This native port will be able to provide instant, comprehensive error listings across an entire project, support more advanced refactorings, and enable deeper insights that were previously too expensive to compute,” read a section of the blog post. 

Source: Microsoft

Microsoft also said that overall memory usage also ‘appeared to be roughly’ half of the current investigation. 

A few developers also questioned why Microsoft chose Go, over other programming languages like Rust. Matt Pocock, a TypeScript expert, wrote, “Far and away the most important reason was its [Go’s] structural similarity to the current JavaScript implementation. Go’s programming patterns closely resemble TypeScript’s existing code structure.” 

“This means that contributors familiar with the existing codebase will be able to navigate the codebase easier,” he added. 

TypeScript 5.8 will be updated to 5.9, and development will continue to the 6.0 series. Project Corsa will be released as TypeScript 7.0. The JS based codebase will be maintained along with the development of the new project – ‘until TypeScript 7+ reaches sufficient maturity and adoption.’

Developers are already in awe of the project. “I cannot think of a bigger impact project in software,” said a user on X. 

Microsoft is also inviting developers for an ‘Ask me Anything’ (AMA) session in the TypeScript community Discord channel at 10 AM PDT on March 13th.

]]>
Nadella Takes a Swipe at OpenAI, Calls It a Product Company, Not a Model Company https://analyticsindiamag.com/global-tech/nadella-takes-a-swipe-at-openai-calls-it-a-product-company-not-a-model-company/ Tue, 11 Mar 2025 08:32:10 +0000 https://analyticsindiamag.com/?p=10165794 “We’re a full-stack systems company, and we want to have full-stack systems capability.”]]>

All may not be well between Microsoft and OpenAI. A new report suggests that Microsoft is building its own AI model to rival OpenAI. 

In a recent podcast, Microsoft chief Satya Nadella said that Microsoft doesn’t need to build an LLM just to “prove a point”. When asked why Microsoft still hasn’t built its own foundational models, he said that Microsoft sees itself as a full-stack company and LLMs are just a part of it.

“We’re a full-stack systems company, and we want to have full-stack systems capability,” he said, adding that the company focuses on integrating models into broader systems and products.

He added that Microsoft has a “long-term stable relationship” with OpenAI and retains important IP rights through their partnership, and the company has built systems, tools, and products around OpenAI’s models rather than just relying on the models themselves. “I do believe the models are becoming commoditised, and in fact, OpenAI is not a model company, it is a product company,” he said.

To some extent, Nadella is right. He is worried that OpenAI, by releasing products, is creating direct competition. That explains why, whenever OpenAI releases new products, Microsoft integrates them into Copilot. For instance, the Copilot app recently announced unlimited access to Voice and Think Deeper, powered by OpenAI’s o1 model.

OpenAI’s revenue for 2024 was approximately $3.7 billion, with significant growth projected for 2025, reaching around $11.6 billion. Meanwhile, Microsoft recently informed shareholders that it is generating over $13 billion in annualised AI revenue.

Mustafa x Microsoft 

Nadella asserted that under Microsoft’s AI chief Mustafa Suleyman’s leadership, Microsoft has built the Phi models. Since the tech giant has been trying to be less dependent on OpenAI, it recently launched Phi-4-multimodal and Phi-4-mini, the latest additions to its Phi family of small language models (SLMs). 

The Phi-4 multimodal model supports applications such as document analysis and speech recognition. On multimodal audio and visual benchmarks, it surpasses Google Gemini 2 Flash and Gemini 1.5 Pro. Microsoft claims that it is comparable to OpenAI’s GPT-4o.

Suleyman reportedly clashed with OpenAI leadership over access to technical details, prompting the company to explore alternatives and invest in in-house innovation.

Last fall, during a video call, Suleyman pressed OpenAI to share documentation on its o1 model’s “chain of thought” reasoning. OpenAI’s refusal sparked a heated exchange with senior leaders, including then CTO Mira Murati, before the call abruptly ended, the report said.

Fast forward to today, AI researchers in Suleyman’s team believe they have made significant progress on the second of their two key priorities. 

Under the leadership of Suleyman’s deputy, Karén Simonyan, the team has successfully trained a series of Microsoft models called MAI. These models reportedly perform at a level comparable to top models from OpenAI and Anthropic on widely recognised benchmarks. 

First reported last year, MAI—internally referred to as MAI-1 (possibly Microsoft AI-1)—is being developed by the company and is around 500 billion parameters in size.

The company is also training reasoning models that use chain-of-thought techniques to rival OpenAI’s offerings. Microsoft is considering releasing the MAI models later this year as an API, allowing developers to integrate them into external applications, a move that would position Microsoft in direct competition with OpenAI’s API services.

Salesforce chief Marc Benioff recently commented that OpenAI chief Sam Altman and Suleyman are not exactly “best friends”. Notably, Microsoft first offered the AI chief role to Altman following his dramatic firing from OpenAI in 2023.

Microsoft has Moved On

To further hedge its bets, Microsoft has begun testing models from OpenAI competitors, including Anthropic, xAI, DeepSeek, and Meta, as potential replacements for OpenAI in its Copilot tools, which are integrated into products like Windows and Edge.

Notably, Microsoft recently announced that distilled versions of the DeepSeek-R1 models, the 7 billion and 14 billion parameter variants, will be available on the Copilot+ PCs.

His comments come after recent reports surfaced that OpenAI is planning to shift its entire workload to Project Stargate, leaving Microsoft Azure. Not to forget that Microsoft is no longer OpenAI’s exclusive cloud partner. 

In a recent blog, OpenAI announced a new large-scale commitment to Azure, which will continue supporting all its products and model training. However, the agreement now allows for more flexibility. 

Instead of exclusivity, Microsoft has a right of first refusal on any new capacity OpenAI wants to add. This means Microsoft gets the first chance to match any other cloud provider’s offer before OpenAI can move forward with them.

]]>
Microsoft Lays Foundation for New India Development Centre in Noida, To Double Down on AI and Cloud Capabilities   https://analyticsindiamag.com/ai-news-updates/microsoft-lays-foundation-for-new-india-development-centre-in-noida-to-double-down-on-ai-and-cloud-capabilities/ Sat, 08 Mar 2025 11:34:50 +0000 https://analyticsindiamag.com/?p=10165700 The proposed campus in Noida’s Sector 145 will cover 15 acres with a built-up area of 1.1 million square feet. ]]>

Microsoft laid the foundation for its proposed India Development Centre (IDC) campus in Noida on Saturday, bolstering its vision for AI, cloud, and security research capabilities in India.

The proposed campus in Noida’s Sector 145 will cover 15 acres with a built-up area of 1.1 million square feet. Microsoft said this facility, once ready, will play a key role in strengthening India’s AI capabilities, fostering digital innovation, and supporting engineering talent.

Rajiv Kumar, Managing Director and President of Microsoft IDC highlighted Microsoft’s commitment to AI innovation, saying, “This proposed facility will attract top talent from India and the world and empower them to innovate across AI, cloud, and security, that will positively impact billions of lives across the planet.” 

He acknowledged the support from the Government of Uttar Pradesh and Noida Authority in facilitating the project, with Uttar Pradesh Chief Minister Yogi Adityanath gracing the occasion. 

“Hyderabad and Bengaluru have been instrumental in Microsoft IDC’s journey. We already have a strong presence in Noida, and the proposed campus adds a new dimension to our expanding footprint. It will help us double down on our capabilities in AI, cloud, and security and make the rapidly growing tech hub in North India stronger,” Kumar told AIM in an exclusive email conversation.

He added that AI applications in India go beyond automation and extend to healthcare, education, agriculture, and manufacturing. “AI in India isn’t just about automation; it’s about making healthcare more accessible, education more personalised, agriculture more efficient, and public infrastructure stronger,” he said.

Investment in India’s Digital Growth

Microsoft has committed a $3 billion investment in AI and cloud capacity in India. The company aims to equip 10 million Indians with AI skills by 2030, having already trained over 860,000 youth.

“India is at an inflection point in its digital journey, and AI is the next big accelerator,” Kumar said. “That’s why, in January, we announced a $3 billion investment in AI and cloud capacity—because India isn’t just adopting AI; it’s actively shaping the AI revolution.”

Collaborations with Startups and Academia

Kumar said that Microsoft is working with IndiaAI to establish AI Centers of Excellence and AI Productivity Labs across ten states to train educators. The company is also supporting Indian startups through its Founders Hub program and partnering with SaaSBoomi to scale the country’s SaaS ecosystem.

Microsoft announced it is partnering with IndiaAI to skill 500,000 individuals, establish AI Centres of Excellence, and collaborate with startups to turn bold ideas into reality. “This isn’t just about funding or technology access—it’s about mentorship, community, and creating the right environment for innovation to thrive,” Kumar said. 

]]>
Microsoft Eyes Salesforce Users with new AI Sales Agents in Copilot https://analyticsindiamag.com/ai-news-updates/microsoft-eyes-salesforce-users-with-new-ai-sales-agents-in-copilot/ Thu, 06 Mar 2025 08:21:30 +0000 https://analyticsindiamag.com/?p=10165346 These agents can operate within Microsoft 365 Copilot and Microsoft 365 Copilot Chat, integrating with Microsoft Dynamics 365 and Salesforce.]]>

Microsoft has announced two new AI sales agents for Microsoft 365 Copilot to help sales teams manage leads and close deals efficiently.

The newly introduced Sales Agent and Sales Chat will be available in public preview in May. According to Microsoft, these agents can operate within Microsoft 365 Copilot and Microsoft 365 Copilot Chat, integrating with Microsoft Dynamics 365 and Salesforce.

“They (agents) connect to both Microsoft Dynamics 365 and Salesforce, so sales reps can nurture and close deals without even opening their CRM,” the company said in its blog post.

Sales Agent autonomously manages leads, converting contacts into qualified prospects and ensuring that no potential deal is overlooked. It can research leads, schedule meetings, and engage customers, drawing on CRM data, company pricing sheets, and Microsoft 365 resources such as emails and meetings. In some cases, it can also complete low-impact sales independently.

Sales Chat accelerates the sales cycle by providing sales representatives with insights from CRM data, emails, pitch decks, and meetings. Users can prompt the AI for real-time data, such as identifying deals at risk or preparing for customer meetings.

“Our ambition is to empower every employee with a Copilot and transform every business process with agents,” said Jared Spataro, Chief Marketing Officer, AI at Work, Microsoft.

Microsoft said that nearly 70% of Fortune 500 companies are already using Copilot. Over the past quarter, businesses have built more than 400,000 custom agents using Microsoft Copilot Studio. 

Companies across industries, including Vodafone and Campari Group, have reported efficiency gains. Vodafone anticipates doubling or tripling its requests for proposals, while Campari Group has cut marketing campaign copy costs by 18%.

“We’ve seen tremendous growth since migrating from Salesforce to Dynamics 365 Sales. Our sales organisation has experienced an increase of 133% year-over-year revenue per head with 111% year-over-year growth overall. When we layered on Microsoft 365 Copilot, sellers realised 30 minutes of time saved per day, while pipeline generation has increased by 20%,” said Richard Thompson, CEO of ANS.

Microsoft also announced the AI Accelerator for Sales program, starting April  2025. This initiative will provide AI migration support for businesses transitioning from legacy CRM systems and assist with seller adoption. Companies participating in the program will receive expert guidance on fine-tuning AI agents to their specific needs.

Meanwhile, Salesforce recently launched Agentforce 2dx, the latest version of its digital labour platform, to offer better autonomous AI agent capabilities. The update enables AI agents to operate proactively, working behind the scenes without continuous human oversight. 

Organisations can integrate these agents into existing data systems, business logic, and user interfaces to streamline workflows and automate business processes.

Agentforce 2dx moves beyond traditional user-initiated chat interactions, allowing AI agents to anticipate business needs and take action dynamically. This expansion supports efficiency and scalability in customer and employee workflows.

Agentforce 2dx will be available to all users in April 2025, with some features already rolling out.

]]>
DeepSeek R1 7B and 14B Distilled Models Available on Microsoft Copilot+ PCs https://analyticsindiamag.com/ai-news-updates/deepseek-r1-7b-and-14b-distilled-models-available-on-microsoft-copilot-pcs/ Tue, 04 Mar 2025 14:10:15 +0000 https://analyticsindiamag.com/?p=10165120 The models are available on the Azure AI Foundry – along with the DeepSeek 1.5B distilled model announced last month.]]>

Microsoft on Monday announced that distilled versions of the DeepSeek-R1 models, the 7 billion and 14 billion parameter variants, will be available on the Copilot+ PCs. The models are available via Azure AI Foundry on Copilot+ PCs powered by Qualcomm Snapdragon X, followed by Intel Core Ultra 200V and AMD Ryzen hardware. 

“DeepSeek distilled models exemplify how even small pretrained models can shine with enhanced reasoning capabilities and when coupled with the NPUs on Copilot+ PCs, they unlock exciting new opportunities for innovation,” said Microsoft in the announcement. 

Microsoft Copilot+ PCs are devices capable of running AI models offline, as they’re equipped with a Neural Processing Unit (NPU). An NPU is a dedicated unit on an SoC (System on Chip) that performs all the calculations for AI-related tasks, and leaves CPUs and GPUs to handle other workloads. 

For manufacturers to ship Copilot+ PCs, Microsoft states a minimum of 16 GB of memory, 256GB SSD and an NPU capable of processing at least 40 TOPS, or trillion operations per second. 

Users can access all the available variants of the DeepSeek models by downloading the AI Toolkit VS Code extension. Microsoft also said that the DeepSeek-R1 14B model outputs 8 tokens per second, and aims to further optimise the performance. 

Last month, Microsoft also announced the availability of NPU optimised version of the DeepSeek-R1 1.5B distilled model on the Copilot+ PCs. 

Manufacturers like HP, Dell, Asus, Acer, and Lenovo are building AI PCs with Microsoft Copilot+ capabilities. Recently, a report from Canalys revealed that AI capable PC shipments reached 15.4 million in Q4 2024. 

“For the full year 2024, 17% of PCs shipped were AI-capable, with the biggest winners being Apple at 54% share, followed by Lenovo and HP at 12% share each,” read the report.

The research firm also added that Windows AI PC shipments grew 26% accounting for 15% of all Windows PCs shipped in Q4 2024. 

The enterprise sector is expected to be a major driver of AI PC adoption. A report from Gartner suggests that AI PCs will occupy 100% of the enterprise market by 2026. Moreover, the end of support for Windows 10 PCs in October 2025 will likely drive the adoption of newer, more capable AI PCs. 

]]>
Microsoft to Pull the Plug on Skype in May 2025, Teams Set to Take Over https://analyticsindiamag.com/ai-news-updates/microsoft-to-pull-the-plug-on-skype-in-may-2025-teams-set-to-take-over/ Fri, 28 Feb 2025 14:59:34 +0000 https://analyticsindiamag.com/?p=10164907 Users will still have access to core Skype features in Teams, including one-on-one calls, group calls, messaging, and file sharing.]]>

Microsoft has announced that Skype will be retired in May 2025 as the company shifts its focus to Microsoft Teams. The move is intended to streamline its consumer communication services and adapt to user needs.

“We will be retiring Skype in May 2025 to focus on Microsoft Teams (free), our modern communications and collaboration hub,” said Jeff Teper, president of collaborative apps and platforms at Microsoft.

Users will still have access to core Skype features in Teams, including one-on-one calls, group calls, messaging, and file sharing. Additional features in Teams include hosting meetings, managing calendars, and building or joining communities. The company said that over the past two years, the number of minutes spent in meetings by consumer users of Teams has quadrupled, indicating growing adoption. 

“Hundreds of millions of people already use Teams as their hub for teamwork, helping them stay connected and engaged at work, school, and at home,” Teper added.

Microsoft is offering Skype users two options during the transition period. Skype users will be able to sign into Teams using their Skype credentials. Chats and contacts will automatically appear in the app. 

The rollout begins with users in the Teams and Skype Insider programs. Teams and Skype users will be able to call and message each other during the transition. Users who choose not to migrate to Teams can export their Skype data, including chats, contacts, and call history. Skype will remain available until May 5, 2025, allowing users time to make a decision.

To transition to Teams, users can download the application from the Microsoft Teams website and log in with their Skype credentials. Microsoft has also provided a step-by-step guide to assist users in making the switch.

Microsoft will discontinue paid Skype features for new customers, including Skype Credit and subscriptions for international and domestic calls. Existing subscription users can continue using their Skype Credits and subscriptions until the end of their next renewal period. After May 5, 2025, the Skype Dial Pad will remain available on the Skype web portal and within Teams for the remaining paid users.

Microsoft acknowledged Skype’s role in shaping modern communication. “Skype has been an integral part of shaping modern communications and supporting countless meaningful moments, and we are honored to have been part of the journey,” Teper said.

]]>
Microsoft Launches Phi-4 multimodal and Phi-4-mini, Matches OpenAI’s GPT-4o https://analyticsindiamag.com/ai-news-updates/microsoft-launches-phi-4-multimodal-and-phi-4-mini-matches-openais-gpt-4o/ Thu, 27 Feb 2025 03:41:40 +0000 https://analyticsindiamag.com/?p=10164714 The Phi-4 multimodal model supports applications including document analysis and speech recognition.]]>

Microsoft has launched Phi-4-multimodal and Phi-4-mini, the latest additions to its Phi family of small language models (SLMs). These models are now available on Azure AI Foundry, Hugging Face, and the NVIDIA API Catalog.

Phi-4-multimodal is a 5.6 billion-parameter model that integrates speech, vision, and text processing. “By leveraging advanced cross-modal learning techniques, this model enables more natural and context-aware interactions, allowing devices to understand and reason across multiple input modalities simultaneously,” said Weizhu Chen, vice president of generative AI at Microsoft. 

Last year, Microsoft launched  phi-4, with 14 billion parameters. The model excels at complex reasoning capabilities.

The Phi-4 multimodal model supports applications including document analysis and speech recognition. On multimodal audio and visual benchmarks, it surpasses Google Gemini 2 Flash and Gemini 1.5 Pro. Microsoft claims that it is comparable to OpenAI’s GPT-4o. 

The company said it has demonstrated strong performance in speech-related tasks, surpassing models such as WhisperV3 and SeamlessM4T-v2-Large in automatic speech recognition and speech translation. It also ranks first on the Hugging Face OpenASR leaderboard with a word error rate of 6.14%. The model shows competitive results in document and chart understanding, Optical Character Recognition (OCR), and visual science reasoning.

On the other hand, Phi-4-mini is a 3.8 billion-parameter text-based model for reasoning, coding, and long-context tasks. It supports sequences of up to 128,000 tokens and offers efficient processing with reduced computational requirements. It supports function calling, allowing integration with external tools and APIs. 

Both of the models are suitable for deployment in constrained computing environments. They can be optimised using ONNX Runtime for cross-platform availability and lower latency. 

Microsoft is incorporating these models into its ecosystem, including Windows applications and Copilot+ PCs. “Copilot+ PCs will build upon Phi-4-multimodal’s capabilities, delivering the power of Microsoft’s advanced SLMs without the energy drain,” said Vivek Pradeep, vice president and distinguished engineer of Windows Applied Sciences.

Developers can access Phi-4-multimodal and Phi-4-mini on multiple platforms and explore their applications in various industries, including finance, healthcare, and automotive technology.

]]>
Microsoft Gives Free and Unlimited Access to Think Deeper and Voice Mode in Copilot https://analyticsindiamag.com/ai-news-updates/microsoft-gives-free-and-unlimited-access-to-think-deeper-and-voice-mode-in-copilot/ Wed, 26 Feb 2025 05:29:23 +0000 https://analyticsindiamag.com/?p=10164629 Based on OpenAI’s o1 model, the features no longer have daily rate limits, but the only thing that might hold users back is the server capacity. ]]>

Microsoft on Tuesday announced free and unlimited access to its Voice Mode and the Think Deeper features in Copilot. Both these features are powered by OpenAI’s o1 reasoning model. 

The voice mode allows users to interact with Copilot using their voices, whereas the Think Deeper feature helps “tackle complex topics” with comprehensive outputs. The latter is analogous to the deep research features that most AI models are offering of late. 

“Our long-held belief has been that when AI is truly democratised, we’ll be able to better empower people to harness its power,” said Yusuf Mehdi, executive VP and CMO at Microsoft. 

“We’re excited to continue to meet the demand we’re seeing and enabling people to have extended conversations, take advantage of our advanced reasoning models and iterate with Copilot,” he added. 

However, Mehdi also said that while the feature doesn’t have any daily rate limits, the only thing that might slow users down is the server capacity. “We’re actively scaling up to handle the demand,” he added. 

Copilot Pro users will continue to have access to the latest models during peak usage, experimental AI features, and the ability to use Copilot in Microsoft 365 apps. As of October last year, Microsoft Copilot had 28 million active users across all platforms. 

Meanwhile, the company has reportedly cancelled leases for a significant data centre capacity in the US. This raises concerns about the long-term sustainability of AI infrastructure investments. Microsoft may be reassessing its AI computing needs, even as it pledges to spend $80 billion this fiscal year on compute infrastructure. 

]]>
Gaming Data is to Microsoft What YouTube is to Google: Nadella https://analyticsindiamag.com/global-tech/gaming-data-is-to-microsoft-what-youtube-is-to-google-nadella/ Mon, 24 Feb 2025 13:05:53 +0000 https://analyticsindiamag.com/?p=10164489 “It could serve as both a general action model and a world model.”]]>

Microsoft recently launched Muse, a generative AI model for gameplay ideation. Built on the World and Human Action Model (WHAM), the model can generate game visuals, controller actions, or both.

In a recent interview with Dwarkesh Patel, Microsoft chief Satya Nadella expressed his excitement about the future of gaming. He revealed that Microsoft will soon have a catalogue of games wherein AI models will play a key role. 

He explained that these models will either be trained to generate game content or will be directly integrated into gameplay. “We’re going to have a catalogue of games soon that we will start using these models for, or we’re going to train these models to generate and then start playing them,” he said.

When Xbox chief Phil Spencer first demoed Muse for him, Nadella saw the model take inputs from an Xbox controller and generate outputs that perfectly matched the game. “It was a massive moment of ‘wow’. It’s kind of like the first time we saw ChatGPT complete sentences, or DALL-E draw, or Sora (sic).”

Nadella shared an interesting fact about Microsoft’s history, mentioning that the company developed a game before even creating Windows. 

“Flight Simulator was a Microsoft product long before we even built Windows. Gaming has a long history at the company, and we want to be in gaming for gaming’s sake,” he said.

Nadella described gaming data as more than just a resource for the gaming industry, suggesting it could serve as both a general action model and a world model. “It’s fantastic,” he said, drawing a comparison: “I think about gaming data as perhaps, you know, what YouTube is to Google, gaming data is to Microsoft.” 

Google uses YouTube data to train its models, such as its video generation model  Veo 2. 

On the other hand, Muse was trained on human gameplay data from Bleeding Edge, a 4v4 online game by Ninja Theory. The dataset includes visuals and controller actions recorded with user consent. The model has been trained on over 1 billion images and actions, representing over seven years of continuous gameplay.

AI x Gaming 

On October 13, 2023, Microsoft acquired Activision Blizzard for $75.4 billion.  The acquisition included franchises such as Call of Duty, Warcraft, Diablo, Overwatch, and Candy Crush. In the recent quarter, Xbox content and services reported a 2% increase in revenue.

Speaking about Microsoft’s substantial investment in gaming, Nadella said cloud gaming is a natural area to invest in as it expands the TAM and allows people to play games everywhere. He added that the combination of AI and gaming could be the ‘CGI moment’ for the gaming industry. 

Reportedly, Microsoft is not the only one betting on AI gaming. Elon Musk’s xAI has similar plans. While introducing xAI’s latest model, Grok-3, Musk said, “We’re launching an AI gaming studio at xAI. If you’re interested in joining us and building AI games, please join xAI.”

Since the launch of Grok-3, many developers have used it to create games.  

“Grok-3 is an incredible AI coding assistant. After a few hours and over 1,000 lines of generated code, I now have a fully functional 2D vertical jumping game,” said Alvaro Cintas-Canto, assistant professor of cybersecurity at Marymount University. He added that the game features different heroes, monsters, platforms, difficulty levels, and lives.

“Grok 3 is so good at programming that it makes game creation feel more like an art project. I generated this themed, endless-runner arcade game in < 30 mins,” said Mickey Friedman, co-founder of Flair.ai.

Besides xAI, Google DeepMind last year introduced Genie 2, a large-scale foundation world model capable of generating diverse playable 3D environments.

It enables the development of embodied AI agents by transforming a single image into interactive virtual worlds, which can be explored by humans or AI using standard keyboard and mouse controls. “The world model is taking shape,” said Google DeepMind chief Demis Hassabis while announcing it. 

Building on this trend, Decart AI launched Oasis, the world’s first real-time, generative AI-based playable world model. This fully interactive game generates each frame through a Transformer model that responds instantly to keyboard and mouse inputs, simulating physics, game mechanics, and graphics.

Meanwhile, Netflix recently appointed Mike Verdu as the VP of GenAI for games. “I am working on driving a once-in-a-generation inflexion point for game development and player experiences using generative AI. This transformational technology will accelerate the velocity of development and unlock truly novel game experiences that will surprise, delight, and inspire players,” said Verdu in a LinkedIn post.

]]>
Microsoft Rethinks Compute Needs, Cancels AI Data Centre Leases https://analyticsindiamag.com/ai-news-updates/microsoft-rethinks-compute-needs-cancels-ai-data-centre-leases/ Mon, 24 Feb 2025 11:53:18 +0000 https://analyticsindiamag.com/?p=10164484 OpenAI is planning to shift its workload from Microsoft to Project Stargate. ]]>

Microsoft has reportedly cancelled leases for a significant data centre capacity in the US, raising concerns about the long-term sustainability of AI infrastructure investments, according to Bloomberg, which cites analyst TD Cowen.

The report added that the tech giant cancelled agreements of “a couple of hundred megawatts” of capacity, citing insights from supply chain providers.

The decision suggests Microsoft may be reassessing its AI computing needs, even as it pledges to spend $80 billion this fiscal year on expanding computing capacity. 

The report, released on Friday, noted that Microsoft has also halted conversions of statement of qualifications, a step that typically leads to formal leasing agreements. The move has sparked speculation over whether Microsoft is adjusting its AI strategy due to potential overcapacity. 

On its January earnings call, CEO Satya Nadella said that the company must sustain high levels of spending to meet “exponentially more demand”. However, Wall Street has increasingly questioned the long-term viability of such investments, given the uncertain commercial applications of AI.

In response to the report, Microsoft reiterated its spending target but acknowledged some adjustments in infrastructure development. 

“While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions,” a company spokesperson said. “Our plans to spend over $80 billion on infrastructure this fiscal year remain on track as we continue to grow at a record pace to meet customer demand.”

At the same time, OpenAI appears to be exploring alternative computing options. In a recent report, The Information suggested that OpenAI is planning to shift its workload from Microsoft to Project Stargate. The report states that in recent weeks, OpenAI has informed investors that Stargate—a developing data centre expansion initiative, expected to receive substantial funding from SoftBank—could supply around 75% of the computing power needed to operate and refine its AI models by 2030.

Notably, Microsoft is no longer OpenAI’s exclusive cloud partner. In a recent blog, OpenAI announced a new large-scale commitment to Azure, which will continue supporting all its products and model training. However, the agreement now allows for more flexibility. 

Instead of exclusivity, Microsoft has a right of first refusal on any new capacity OpenAI wants to add. This means Microsoft gets the first chance to match any other cloud provider’s offer before OpenAI can move forward with them.

On the other hand, the emergence of cost-efficient AI models, such as the open-source model developed by Chinese company DeepSeek, has intensified scrutiny of major firms’ AI expenditures. 

DeepSeek claims its model rivals US technology at a fraction of the cost, raising questions about the financial sustainability of large-scale AI infrastructure investments.

]]>
Microsoft’s New Open Source Deep Learning Model Can Generate ‘Thousands of Protein Structures Per Hour’ https://analyticsindiamag.com/ai-news-updates/microsofts-new-open-source-deep-learning-model-can-generate-thousands-of-protein-structures-per-hour/ Fri, 21 Feb 2025 06:03:08 +0000 https://analyticsindiamag.com/?p=10164288 “We believe that BioEmu-1 is the first step towards generating the full ensemble of structures that a protein can take,” Microsoft said.]]>

Microsoft has unveiled a new deep-learning model called Biomolecular Emulator-1 (BioEmu-1). This model can “generate thousands of protein structures” hourly. The company has also released the model as open source. It is based on the preprint of a research study that was unveiled last December. 

Microsoft said the model offers superior computational efficiency compared to traditional molecular dynamics (MD) simulations, thereby opening the door to insights that have been out of reach until now.

“Predicting a single protein structure from its amino acid sequence is like looking at a single frame of a movie – it offers only a snapshot of a highly flexible molecule,” Microsoft said. The BioEmu-1 provides researchers and scientists with a comprehensive view of all the different structures each protein can adopt. 

“A deeper understanding of proteins enables us to design more effective drugs,” the tech giant added. 

By training and fine-tuning extensive datasets that help the model recognise, map, predict, and sample protein structures, Microsoft said the model is capable of making accurate predictions for proteins it has never seen before. 

“Our model offers a view on intermediate structures, which have never been experimentally observed, providing viable hypotheses about how this protein functions,” it further said.

BioEmu-1 accurately predicts MD equilibrium distributions using much less computational power. Microsoft compared 2D projections of the structural distribution of DE Shaw research’s simulation of Protein G and samples from BioEmu-1.

“BioEmu-1 reproduces the MD distribution accurately while requiring 10,000 to 1,00,000 times fewer GPU hours,” Microsoft said. 

In December last year, Google DeepMind open sourced the AlphaFold 3 model, making its training weights accessible to academic researchers and scientists for non-commercial use. 

Last year’s Nobel Prize in Chemistry was awarded to Demis Hassabis, CEO and co-founder of Google DeepMind, and John M Jumper for their contributions to protein structure prediction through AlphaFold. David Baker, a professor at the University of Washington, received the other half for computational protein design.

Microsoft is on a roll, and this announcement comes days after it unveiled the new Majorana 1 quantum chip. Satya Nadella, CEO of Microsoft, called Majorana 1 “a chip that can fit in the palm of your hand yet can solve problems that all computers on Earth today combined could not”.

The Majorana 1 uses a new ‘Topological Core’ architecture and can potentially hold one million qubits on a single chip, slightly larger than desktop computer CPUs.

]]>
True AGI Means 10% Economic Growth, Says Satya Nadella https://analyticsindiamag.com/ai-news-updates/true-agi-means-10-economic-growth-says-satya-nadella/ Thu, 20 Feb 2025 04:12:44 +0000 https://analyticsindiamag.com/?p=10164166 “Let’s have that Industrial Revolution type of growth,” Nadella said.]]>

Microsoft CEO Satya Nadella has challenged conventional definitions of artificial general intelligence (AGI), arguing that true AGI should be measured by economic growth rather than AI advancements alone. Nadella said that the focus should be on broader economic impact rather than self-proclaimed AI breakthroughs.

“If you’re going to have this explosion or abundance…of intelligence available, the first thing we have to observe is GDP growth,” said Nadella in a recent interview with Dwarkesh Patel. “Before I get to what Microsoft’s revenue will look like, there’s only one governor in all of this. This is where we get a little ahead of ourselves with all this AGI hype.”

He added that current economic growth rates in the developed world remain low, at around 2%, and inflation-adjusted figures would remain close to zero. According to Nadella, the real benchmark for AGI’s success should be global economic growth reaching 10%.

“Us self-claiming some AGI milestone, that’s just nonsensical benchmark hacking to me,” he said. “The real benchmark is the world GDP growing at 10%.”

Nadella said that the primary beneficiaries of AI advancements will not be tech companies but industries using AI to drive productivity.

“The big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity,” he said. “Suddenly, productivity goes up, and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Drawing a comparison to past technological revolutions, Nadella said that AI should drive significant economic progress similar to the Industrial Revolution.

“When we say this is like the Industrial Revolution, let’s have that Industrial Revolution type of growth,” he said. “That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That’s the real marker. It can’t just be supply-side.”

Microsoft recently introduced Majorana 1, the world’s first quantum chip using topological qubits, along with Muse, a generative AI model for gameplay ideation. The tech giant also made Azure AI Foundry Labs available – a hub where developers, startups, and enterprises can explore Microsoft’s latest AI research and innovations. It provides access to experimental AI models, agentic frameworks, and tools that accelerate the transition from research to real-world applications.

]]>
Microsoft Introduces Muse, a GenAI Model for Gameplay Ideation https://analyticsindiamag.com/ai-news-updates/microsoft-introduces-muse-a-genai-model-for-gameplay-ideation/ Wed, 19 Feb 2025 18:13:44 +0000 https://analyticsindiamag.com/?p=10164150 Muse was trained on human gameplay data from Bleeding Edge, a 4v4 online game by Ninja Theory.]]>

Microsoft has introduced Muse, a generative AI model designed for gameplay ideation. The model, built on the World and Human Action Model (WHAM), can generate game visuals, controller actions, or both.

The research, published in Nature, was developed by the Microsoft Research Game Intelligence and Teachable AI Experiences (Tai X) teams in collaboration with Xbox Game Studios’ Ninja Theory. 

The research aims to refine AI-generated gameplay for game development and interactive storytelling. Microsoft has open-sourced the model’s weights, sample data, and the WHAM Demonstrator, a concept prototype for interacting with WHAM models. These resources are available on Azure AI Foundry.

“I’m incredibly proud of our teams and the milestone we have achieved, not only by showing the rich structure of the game world that a model like Muse can learn but also by demonstrating how to develop research insights to support creative uses of generative AI models,” said Katja Hofmann, senior principal research manager at Microsoft Research. 

Muse was trained on human gameplay data from Bleeding Edge, a 4v4 online game by Ninja Theory. The dataset includes visuals and controller actions recorded with user consent. The model has been trained on over 1 billion images and actions, representing more than seven years of continuous gameplay.

The Game Intelligence and Teachable AI Experiences teams playing the Bleeding Edge game together.

Gavin Costello, technical director at Ninja Theory, said, “It’s been amazing to see the variety of ways Microsoft Research has used the Bleeding Edge environment and data to explore novel techniques in a rapidly moving AI industry.”

The research was motivated by the release of ChatGPT in 2022. Microsoft scaled the model’s training from a V100 GPU cluster to H100s, refining its representation of controller actions and images. Early versions struggled with consistency, but iterative training improved the model’s ability to predict accurate game dynamics.

Comparing Muse’s generated visuals with actual gameplay, researchers assessed key capabilities such as consistency, diversity, and persistency. Consistency measures whether generated sequences adhere to game dynamics. 

On the other hand, diversity evaluates how gameplay variations evolve from the same prompt. Persistency determines if introduced elements are maintained in subsequent sequences.

Cecily Morrison, senior principal research manager at Microsoft, highlighted the importance of involving game creators from the outset. “It was a great opportunity to join forces at this early stage to shape model capabilities to suit the needs of creatives right from the start, rather than try to retrofit an already developed technology.”

Meanwhile, xAI chief Elon Musk recently announced that the company is launching a game studio to reshape the gaming industry. While announcing xAI’s latest model, Grok-3, Musk said, “We’re launching an AI gaming studio at xAI. If you’re interested in joining us and building AI games, please join xAI.”

]]>
Microsoft Unveils Majorana 1, ‘World’s First’ Quantum Processor Powered by Topological Qubits https://analyticsindiamag.com/ai-news-updates/microsoft-unveils-majorana-1-worlds-first-quantum-processor-powered-by-topological-qubits/ Wed, 19 Feb 2025 17:13:04 +0000 https://analyticsindiamag.com/?p=10164144 “A chip that can fit in the palm of your hand yet can solve problems that all computers on Earth today combined could not!” says Satya Nadella.]]>

Microsoft announced the creation of the Majorana 1, claiming it as the world’s first quantum chip using topological qubits, on Wednesday. This represents a major step toward achieving practical quantum computing. 

The company expects that this new chip will allow quantum computers to solve industrial-scale problems in the near future.

Satya Nadella, chairman and CEO at Microsoft took to X to express his views on the breakthrough as well. 

He said, “They are 1/100th of a millimetre, meaning we now have a clear path to a million-qubit processor. Imagine a chip that can fit in the palm of your hand yet is capable of solving problems that even all the computers on Earth today combined could not!”

Microsoft has chosen to manufacture the Majorana 1 components in-house in the United States. The company has no plans to make the chip available to clients through Azure, unlike its Maia 100 AI chip.

The Majorana 1 uses a new ‘Topological Core’ architecture and can potentially hold one million qubits on a single chip, which is just slightly larger than desktop computer CPUs. 

Chetan Nayak, a technical fellow for quantum hardware at Microsoft expressed that the goal was to “invent the transistor for the quantum age.” He added, “The next step is a scalable architecture built around a single-qubit device called a tetron.”

In a recent discussion, Nayak expressed, “It’s gratifying to see something in nature that we have been thinking about for a long time and that people hypothesised for decades.”

This breakthrough, the result of nearly 20 years of research, uses a novel material called a ‘topoconductor’ or topological superconductor to control Majorana particles, leading to more reliable qubits. This material is of a special category that can create an entirely new state of matter – not a solid, liquid or gas but a topological state.

According to Microsoft, a quantum computer with one million qubits has the potential to address complex industrial and societal problems, such as breaking down microplastics or creating self-healing materials.

The Microsoft team behind the breakthrough explains behind the scenes of the processor:

In its official blog, the company mentioned the challenges it has faced over the years. Until recently, these exotic particles Microsoft sought to use (Majoranas) had never been seen or made. They don’t exist in nature and can only be coaxed into existence with magnetic fields and superconductors.

While companies like Google and IBM are also developing quantum processors, Microsoft’s Majorana 1 incorporates eight topological qubits using indium arsenide (a semiconductor) and aluminium (a superconductor).

Google’s recent quantum chip, Willow, took over the internet after its release for suggesting the possibility of a ‘multiverse’, even so, that it sparked a visionary exchange between Google CEO Sundar Pichai and SpaceX’s Elon Musk.

However, many critics questioned the tech giant’s bold claims, saying the tech giant’s claims were based on a flawed benchmark and that it has no real-world applications.

]]>
AI Use Reduces Critical Thinking Ability, Says New Microsoft Study https://analyticsindiamag.com/ai-news-updates/ai-use-reduces-critical-thinking-ability-says-new-microsoft-study/ Wed, 19 Feb 2025 15:59:43 +0000 https://analyticsindiamag.com/?p=10164140 The study indicates that professionals who are more confident in GenAI tend to think less critically during their tasks. ]]>

A new study reveals that while generative AI (GenAI) tools can significantly reduce workload, they also risk diminishing critical thinking skills among knowledge workers. 

The study was conducted jointly by researchers from the Microsoft research lab in Cambridge and Hao-Ping (Hank) Lee, a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University.

The researchers surveyed 319 professionals and analysed 936 real-world examples to understand the impact of AI tools like ChatGPT and Copilot on cognitive processes in the workplace.

This was targeted at professionals who use these tools at work at least once a week. The researchers said, “When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification, from problem-solving to AI response integration, and from task execution to task stewardship.”

The findings, presented at the CHI Conference on Human Factors in Computing Systems, indicate that professionals who are more confident in GenAI tend to think less critically during their tasks. 

This suggests a potential over-reliance on AI, hindering independent problem-solving.

“It’s a simple task, and I knew ChatGPT could do it without difficulty, so I just never thought about it, as critical thinking didn’t feel relevant,” noted one participant, highlighting this tendency to overestimate AI capabilities. 

Conversely, participants who were highly self-confident in their skills often perceived greater effort in tasks, particularly when evaluating and applying AI responses.

The research highlights a significant shift in how knowledge workers approach their responsibilities. 

Instead of focusing primarily on hands-on task execution, they are increasingly transitioning to overseeing AI-generated results, including verifying outputs for accuracy. 

This includes setting clear goals, refining prompts, and assessing AI-generated content to meet specific criteria.

A user stresses that with straightforward factual information, ChatGPT usually gives good answers, showing AI’s ability.

However, GenAI’s limitations and biases also require careful consideration. One participant noted that AI tends to make up information to agree with whatever points you are trying to make. Hence, the editing process could be time-consuming.

Additionally, a participant also said the AI output was too emphatic and did not fit the scientific style, and it needed to be rephrased. 

Based on these findings, researchers emphasise the importance of designing GenAI tools to support critical thinking. The study suggests addressing factors such as awareness of limitations, motivation for careful evaluation, and skill development in areas where AI might fall short.

]]>
Microsoft Opens New Hyderabad Campus, Partners with Telangana Govt for AI Growth https://analyticsindiamag.com/ai-news-updates/microsoft-opens-new-hyderabad-campus-partners-with-telangana-govt-for-ai-growth/ Thu, 13 Feb 2025 15:39:00 +0000 https://analyticsindiamag.com/?p=10163616 Microsoft has inaugurated a new 1.2 million-square-foot engineering facility in Hyderabad, which will house 3,000 engineers.]]>

Microsoft inaugurated a new 1.2 million-square-foot engineering facility in Hyderabad on Thursday. It will house 3,000 engineers. 

Speaking on the occasion, IT and industries minister Sridhar Babu highlighted Hyderabad’s rapid growth as a global technology hub. 

“Hyderabad is not just a city of today. It is the city of the future. With 52 research and development institutions, 30 universities, and a skilled workforce of six million, we are driving the next era of innovation,” he said. 

He further added that Microsoft’s expansion in Hyderabad marks a significant milestone. The company, which started with a small research and development centre, now has its largest hub outside the United States in the city. 

Babu further emphasised Hyderabad’s contributions to global technology, saying, “Hyderabad has not only nurtured technology talent but also shaped global leadership, giving the world Satya Nadella and redefining artificial intelligence and cloud innovation.”

The Telangana government is making major investments to solidify the city’s position in the technology sector. Babu revealed that the state is investing $15 billion in initiatives such as Future City, Artificial Intelligence City, Young India Skill University, and deep technology centres of excellence in quantum computing, AI, bioinformatics, and legal technology. 

Additionally, the government is working towards 100% digital connectivity for nine million households and the development of AI-driven data centres.

Looking ahead, Babu states that Hyderabad aims to go beyond being a Global Capability Centre (GCC) hub and transform it into a Global Value Centre (GVC). “The future of global technology is being built here,” Babu said, reinforcing Hyderabad’s growing influence in the world’s deep technology value chain.

A Memorandum of Understanding (MoU) was signed between Microsoft and the government of Telangana. This follows a January 2025 meeting between the Chief Minister and Microsoft CEO Satya Nadella, during which they agreed to collaborate on modernising government IT, cloud infrastructure, and accelerating AI adoption. 

Microsoft India Development Centre managing director and president Rajiv Kumar said the new building at the Microsoft IDC in Hyderabad will enable them to innovate and develop next-generation AI products. 

]]>
Microsoft Makes DeepSeek R1 Available on Azure and GitHub https://analyticsindiamag.com/ai-news-updates/microsoft-makes-deepseek-r1-available-on-azure-and-github/ Thu, 30 Jan 2025 03:43:51 +0000 https://analyticsindiamag.com/?p=10162492 Customers will soon be able to run DeepSeek R1’s distilled models locally on Copilot+ PCs.]]>

DeepSeek R1 has been added to the Azure AI Foundry and GitHub model catalogue, expanding the platform’s AI portfolio. The model is now accessible for businesses looking to integrate advanced AI solutions while maintaining security and reliability standards.

Microsoft announced on its official blog that DeepSeek R1 is available on its enterprise-ready Azure AI Foundry platform, which supports over 1,800 models. 

“Bringing models like DeepSeek R1 to Azure AI Foundry allows businesses to scale AI-powered applications with speed and security,” said Asha Sharma, corporate vice president of AI Platform at Microsoft.

“Customers will soon be able to run DeepSeek R1’s distilled models locally on Copilot+ PCs, as well as on the vast ecosystem of GPUs available on Windows. Beyond Copilot+ PCs, the most powerful AI workstation for local development is a Windows PC running WSL2, powered by NVIDIA RTX GPUs,” said Microsoft chief Satya Nadella during the recent earnings call on Wednesday.

He further added that DeepSeek has introduced real innovations, some of which even OpenAI discovered in o1. “Now, of course, those innovations are becoming commoditised and will be widely used,” he said.

According to DeepSeek, R1 is a cost-efficient AI model that enables developers to incorporate AI capabilities with minimal infrastructure investment. Azure AI Foundry provides built-in model evaluation tools, allowing users to test, benchmark, and deploy AI applications efficiently.

Microsoft emphasised its commitment to AI safety and compliance. DeepSeek R1 has undergone red teaming, security reviews, and automated behaviour assessments. Azure AI Content Safety includes built-in content filtering, with options for users to opt out. The Safety Evaluation System helps businesses test AI applications before deployment.

To use DeepSeek R1, developers can search for the model in the Azure AI Foundry catalogue, access the model card, and deploy it to obtain an inference API and key. Users can test the model in a playground environment before integrating it into applications.

DeepSeek R1 is also available on GitHub, where developers can find additional resources and integration guides. Microsoft stated that future versions of the model would be available in distilled formats for local deployment on Copilot+ PCs.

This follows Microsoft and OpenAI’s investigation into whether the Chinese AI startup used OpenAI’s output to train its model. A recent report states that OpenAI has found evidence suggesting DeepSeek used its proprietary models to develop an open-source competitor, raising concerns about a possible intellectual property breach.

]]>
Will Microsoft Build Frontier Models to Reduce Reliance on OpenAI? https://analyticsindiamag.com/ai-features/will-microsoft-build-frontier-models-to-reduce-reliance-on-openai/ Fri, 24 Jan 2025 12:59:48 +0000 https://analyticsindiamag.com/?p=10162181 Salesforce CEO Marc Benioff believes it will do so and might not use OpenAI anymore. ]]>

A few days ago, Microsoft announced an “evolution” of their partnership with OpenAI, which suggests a notable shift in their collaboration dynamics. OpenAI has been using Microsoft’s cloud services exclusively, but now it has the option to choose from other cloud platforms. Microsoft now has the right of first refusal (ROFR) when OpenAI requests more computing resources from the company.

“This new agreement…includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR),” the announcement stated. 

Notably, there has been speculation for quite some time that the relationship between these two companies has deteriorated. 

What was surprising is that Salesforce CEO Marc Benioff, on a rare occasion, made a positive prediction about Microsoft. “It’s extremely important that OpenAI gets to other platforms quickly because Microsoft is building their own AI,” he said. 

Benioff also speculated that Microsoft would have its own frontier models and said, “I don’t think they will use OpenAI in the future.”

While Microsoft has not officially announced any plans to build a large frontier model, certain signs could suggest such a possibility.

Microsoft Will Have to Take a Phi-4 Approach

The latest in Microsoft’s Phi series, the Phi-4, packs in 14B parameters. As per the benchmarks, the Phi-4 outperformed the Llama 3.3 70B and OpenAI’s GPT-4o in several benchmarks. 

Not only did this indicate impressive performance on a small model, but Microsoft also solved a crucial problem that many AI researchers are discussing: data scarcity. The model relies mostly on high-quality synthetic data to achieve success. Unlike OpenAI, the company has also not used any inference optimisation techniques. 

This is significant progress in breaking the barrier of scaling laws. “Blindly scaling, like how people have been doing with trillion parameter models, isn’t just needed, right?” Harkirat Behl, member of technical staff at Microsoft AI, earlier told AIM

Benchmarks are another way to gauge a model’s performance. For a long time, leakage of benchmark data in the training corpus and overfitting the model’s performance to the test sets have led to unfair results. 

However, with Phi-4, this concern may have been eliminated in some ways. Microsoft tested the model on a benchmark developed after collecting all the training data. On these benchmarks, namely the November 2024 AMC 10/12 tests, it outperformed several competitors. 

Given that Phi-4 is open source, several developers were able to test its capabilities. The model hit 50,000 downloads on Ollama, an open-source AI model platform, in just three days and 40,000 on Hugging Face. 

Several developers were in awe of the model’s performance, especially in limited hardware conditions. 

Moreover, given the legacy of the company, and the capital, there’s no doubt about investing in GPU clusters to do so. 

Amidst the speculation, Microsoft CEO Satya Nadella said, “We’re not going to do things twice.” His statement indicates that the company will make the most of OpenAI’s existing models. 

“The more important thing for me is to build value on top of OpenAI. So we have a fantastic post-training stack,” he added. 

‘Mustafa Suleyman and Sam Altman Aren’t Best Friends’ 

According to Benioff, Microsoft hired its AI CEO Mustafa Suleyman from Google DeepMind to build frontier models. 

He added that “Suleyman and [OpenAI CEO] Sam Altman are not best friends”. 

Nadella, however, dismissed any signs of a deteriorating relationship. “Our partnership continues,” he said, adding that he believes that the new details are only going to be beneficial for both, especially in the context of The Stargate Project. 

For this project, big tech companies, including Oracle, Softbank, and OpenAI, are pooling $500 billion dollars to build data centres where OpenAI, among many other AI companies, will have access to unprecedented levels of resources. 

“Sam wants to continue with the scaling laws to build out more compute in order for him to train more models and we have a roofer. So he comes to us first,” Nadella said. 

“If we meet those needs, then we clear it. If not, he can go to these other providers. And so, I think it works out well for Sam and for us,” he added. 

Notably, Microsoft’s latest announcement about the partnership carried the ROFR detail at the very end. 

A majority of it mostly focused on the exclusivity of OpenAI IP to Azure, their revenue-sharing agreements. The announcement also stated that “OpenAI recently made a new, large Azure commitment that will continue to support all OpenAI products as well as training”.

However, the relationship carries multiple layers. Recently, Microsoft and OpenAI have reportedly agreed on a new, specific definition of artificial general intelligence (AGI). 

OpenAI can only achieve AGI when it has built a system that can generate $100 billion in profits. This is important because a clause in the agreement reportedly suggests that Microsoft will no longer have access to OpenAI’s models when the latter achieves AGI.

If the new definition of AGI is anything to go by, OpenAI is far from it. This year, OpenAI observed a $5 billion loss, with $3.7 billion in revenue. Reports also suggest that OpenAI will not turn into a profitable entity until 2029.

Moreover, OpenAI’s losses raised scepticism about how they could contribute to The Stargate Project. In a post on X, xAI founder Elon Musk claimed that OpenAI doesn’t actually have any money.

]]>
Big Tech Rivalries Just Got Ugly https://analyticsindiamag.com/global-tech/big-tech-rivalries-just-got-ugly/ Sat, 18 Jan 2025 05:27:27 +0000 https://analyticsindiamag.com/?p=10161724 Elon Musk and Jeff Bezos are new best friends in town. ]]>

Silicon Valley, the epicentre of innovation, is now a bitter battleground of rising tensions and fierce rivalries among its tech leaders. Recent clashes between tech giants like Microsoft and Salesforce, Apple and Meta, as well as emerging startups like xAI and OpenAI, show a growing trend of hostility that could impact the industry in the long run.

The spite is clearly visible in the sharp critiques these tech bosses have been directing at their competitors.

In a recent podcast with Joe Rogan, Meta chief Mark Zuckerberg took a swipe at Apple, saying that the Cupertino giant hasn’t invented anything great in a while. “Steve Jobs invented the iPhone, and now they’re just sitting on it 20 years later. So, how are they making more money as a company? They do it by squeezing people,” he said. 

The decline in iPhone sales has provided fresh fodder for Zuckerberg. He argues that Apple compensates for this slump by imposing hefty fees—often referred to as the “Apple tax”—on app developers. This practice, he believes, stifles innovation and limits opportunities for smaller companies trying to compete in the app market.

Agentic Wars 

Meanwhile, the chiefs of Microsoft and Salesforce have been exchanging verbal volleys over agentic AI. Both companies compete for dominance in the field of autonomous AI agents, which are capable of performing tasks without human intervention.

Salesforce CEO Marc Benioff, in a recent podcast, slammed Microsoft Copilot. “This Copilot thing has been a huge disaster for them from a branding and validation standpoint. Customers don’t look at them and don’t take them seriously in AI,” Benioff said.

He was responding to Microsoft chief Satya Nadella, who also, in a podcast, indirectly took a dig at Salesforce by saying that traditional SaaS companies will collapse in the AI agent era. “I think the notion that business applications exist—that’s probably where they’ll all collapse in the agent era. Because if you think about it, they are essentially CRUD databases with a bunch of business logic,” he said.

The ‘Snowbricks’ Saga  

In the data warehousing domain, Databricks and Snowflake have emerged as arch-rivals. The former recently raised $10 billion in one of its largest funding rounds ever. The leading data and AI company is expected to go public this year, a move that may create unease for the AI cloud data company Snowflake. 

Databricks CEO Ali Ghodsi, however, believes the company is far ahead of Snowflake and refuses to acknowledge it as a competitor. “We had a program called Snow Melt to go after Snowflake, but that’s behind us now,” he said in a recent interview.

Another time, Ghodsi admitted that Snowflake no longer kept him up at night. “There was a time they would, but not anymore.” 

When it comes to numbers, Databricks expects to surpass a $3 billion annual revenue run rate by the end of its fourth quarter, which ends on January 31, 2025. The company reported over 60% revenue growth in the third quarter of 2024. Snowflake, on the other hand, expects a product revenue of $3.43 billion in 2025.

“I have no idea why he is so obsessed with Snowflake because I am not obsessed with Databricks,” Michael Scarpelli, CFO at Snowflake, said in an old interview about Ghodsi.

Musk and Sam’s Bromance 

Tensions also escalated significantly between tech bros Sam Altman, the CEO of OpenAI, and Elon Musk, the CEO of xAI, culminating in a series of public disputes and legal confrontations. Altman has openly criticised Musk, labelling him a “bully” in a recent interview. 

He also expressed frustration over Musk’s behaviour, noting that the latter often engaged in public quarrels with other prominent figures in the tech industry, including Bill Gates and Jeff Bezos. 

Altman believes that Musk’s issues with OpenAI stem from a desire for control over the organisation. He said, “Everything we’re doing, I believe Elon would be happy about if he were in control of OpenAI.”

Attacking right-back, Musk criticised OpenAI’s transition from a non-profit to a for-profit model. “OpenAI was funded as an open-source non-profit but has become a closed-source, profit-maximising entity,” he wrote on X.

New Step Brothers in Town

Surprisingly, Musk and Amazon founder Jeff Bezos, who have been engaged in fierce competition in the space sector through their companies SpaceX and Blue Origin, recently exchanged friendly messages on X, suggesting a potential thaw in their relationship.

Bezos celebrated the successful orbit of his New Glenn rocket on its maiden flight. Although the mission was largely successful, the booster intended for recovery was lost during re-entry.

“Congratulations on reaching orbit on the first attempt! Jeff Bezos,” Musk posted. He further lightened the mood by sharing GIFs from the movie Step Brothers, humorously suggesting that they might have just become “best friends.” 

On the same day, Musk’s SpaceX attempted to launch its Starship rocket, which unfortunately exploded shortly after takeoff. However, the Super Heavy booster successfully returned to Earth, a feat that drew praise from Bezos. “Kudos to you and the whole SpaceX team on the flawless booster catch! Very impressive,” Bezos said.

The escalating rivalries in Silicon Valley prove that the race for innovation often comes at the cost of collaboration and unity. However, there is still room for love.

]]>
Microsoft Unveils MatterGen, an AI Breakthrough for Materials Discovery https://analyticsindiamag.com/ai-news-updates/microsoft-unveils-mattergen-an-ai-breakthrough-for-materials-discovery/ Fri, 17 Jan 2025 06:56:15 +0000 https://analyticsindiamag.com/?p=10161603 Developers have released source code for MatterGen under the MIT license.]]>

Microsoft on Thursday launched MatterGen, a generative AI tool designed to revolutionise how we understand material discovery, marking a transformative moment in materials science. 

“Our MatterGen model applies generative AI to create new compounds with unprecedented precision,” said Satya Nadella, chairman and CEO at Microsoft. 

Unlike traditional approaches that test existing materials, MatterGen can generate entirely new ones based on specific requirements. The researchers detailed this breakthrough in their paper ‘A generative model for inorganic materials design.’

“MatterGen offers a paradigm shift,” said senior researchers Claudio Zeni, Robert Pinsler, Daniel Zügner, Andrew Fowler, and others.

Outperforms Screening Methods

Traditional methods reach a limit of 40 candidates when looking for materials with specific properties, such as high compression resistance. MatterGen, however, discovered over 100 potential candidates in tasks such as identifying stable, high-bulk modulus structures. 

The AI is also trained on extensive datasets, including the Materials Project and Alexandria databases, to ensure state-of-the-art performance. The model uses an innovative algorithm to handle complex material structures more accurately.

Screening vs generative approaches to materials design

MatterGen Created a New Material!

The tool’s capabilities were tested in collaboration with Prof. Li Wenjie’s team at the Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences. 

They challenged MatterGen to design a material with specific compression resistance (200 GPa bulk modulus). The result? 

A new material called TaCr₂O₆ was successfully synthesised and matched the AI’s predictions, even accounting for variations in how the tantalum (Ta) and chromium (Cr) atoms were arranged.

Experimental validation of the proposed compound, TaCr2O6  

Open Access and Future Directions  

Christopher Stiles from the Johns Hopkins University Applied Physics Laboratory highlighted the significance of the innovation: “We are interested in understanding the impact that MatterGen could have on materials discovery.”  

MatterGen’s developers have released its source code under the MIT license, encouraging community collaboration. Researchers aim to expand the tool’s applications in fields such as battery and magnet development.  

The integration of MatterGen with AI simulation tools like MatterSim further accelerates material exploration and simulation, creating a dynamic system for scientific discovery. 

Materials discovery not only began with Microsoft, but Google DeepMind also released research titled ‘Scaling deep learning for material discovery’ in 2023, where they discovered 2.2 million new crystals, equivalent to 800 years of work of knowledge.

Meta also entered material size by releasing a massive data set called Open Materials 2024 (OMat24), which contained over 118 million examples of material simulations and structures. 

It focused on a wide range of inorganic bulk materials to improve AI-enabled material discovery.

In December last year, Amazon also announced a multi-year partnership with Orbital Materials to develop new materials that help decarbonise data centres using their ‘proprietary AI platform’.

]]>
Microsoft Releases AutoGen v0.4 with Major Updates to Multi-Agent AI Framework https://analyticsindiamag.com/ai-news-updates/microsoft-releases-autogen-v0-4-with-major-updates-to-multi-agent-ai-framework/ Wed, 15 Jan 2025 05:46:32 +0000 https://analyticsindiamag.com/?p=10161421 The new architecture addresses issues identified in earlier versions, including architectural constraints and limited debugging functionality. ]]>

Microsoft Research has announced the release of AutoGen v0.4, a redesigned library for agentic AI and multi-agent applications. This update introduces an asynchronous, event-driven architecture to address user feedback and enhance functionality, the company stated in its blog.

The new version includes features such as asynchronous messaging, modular components, improved debugging tools, and cross-language support. AutoGen v0.4 comprises three layers – core, agent chat, and first-party extensions.

The framework introduces upgraded developer tools, including AutoGen Bench for benchmarking agents and AutoGen Studio for rapid prototyping. AutoGen Studio offers capabilities like real-time agent updates, mid-execution control, and interactive feedback.

“This low-code interface enables rapid prototyping of AI agents,” the blog explained while talking about AutoGen Studio. To facilitate migration from previous versions, the AgentChat API maintains a similar level of abstraction as v0.2. Microsoft provides a migration guide for detailed assistance.

The new architecture addresses issues identified in earlier versions, including architectural constraints and limited debugging functionality. The asynchronous approach enables flexible multi-agent collaboration patterns and improved reusability of components.

AutoGen v0.4’s modular design allows users to customise systems with pluggable components, including custom agents, tools, memory, and models. The framework also supports the creation of proactive and long-running agents using event-driven patterns.

The update highlighted improved observability and control over agent interactions and workflows. Built-in metric tracking, message tracing, and debugging tools provide monitoring capabilities, along with support for OpenTelemetry for industry-standard observability.

Cross-language support is a notable addition, which enables interoperability between agents built in different programming languages. Currently, AutoGen v0.4 supports Python and .NET, with plans to expand to additional languages in the future.

Microsoft’s roadmap for AutoGen includes releasing .NET support and introducing built-in applications for challenging domains. The company encourages user engagement through AutoGen’s Discord server and GitHub repository.

“We remain committed to the responsible development of AutoGen and its evolving capabilities,” the blog post added. The release also introduced Magentic-One, described as “a new generalist multi-agent application to solve open-ended web and file-based tasks across various domains”.

]]>
Microsoft Confirms No Layoffs in India Despite Global Job Reductions https://analyticsindiamag.com/ai-news-updates/microsoft-confirms-no-layoffs-in-india-despite-global-job-reductions/ Mon, 13 Jan 2025 11:39:38 +0000 https://analyticsindiamag.com/?p=10161363 “More jobs are being created for India,” Puneet Chandok, president of Microsoft India and South Asia, said.]]>

Microsoft has clarified that there will be no layoffs in its India operations, even as it reduces its global workforce by less than 1% based on performance evaluations. According to a report by BusinessLine, Puneet Chandok, president of Microsoft India and South Asia, highlighted the company’s active projects and its focus on job creation in the country.

Microsoft’s India workforce comprises approximately 20,000 employees out of a global workforce of 2,28,000. Addressing the layoffs, Chandok reportedly stated, “No, not in India…We are engaged in so many projects. In fact, for all of India, more jobs are being created.”

Microsoft’s India Vision

CEO Satya Nadella recently announced Microsoft’s $3 billion investment to expand Azure’s infrastructure in India, marking the company’s largest investment in the country to date. This initiative aims to enhance AI and computing capabilities by adding new data centres and incorporating sustainable practices, such as liquid-cooled AI accelerators and zero-waste construction. 

Microsoft also plans to train 10 million people in AI by 2030 as part of its ADVANTA(I)GE INDIA initiative. This will strengthen India’s position as an AI and cloud hub.

During his visit, Nadella met with Prime Minister Narendra Modi to discuss India’s AI-driven future and praised the synergy between India’s demographics, entrepreneurial ecosystem, and AI ambitions. 

Nadella reaffirmed Microsoft’s commitment to fostering AI innovation and expanding its presence in the country while also highlighting a broader $80 billion global investment planned for FY 2025 to establish AI-enabled data centres.

India plays a key role in Microsoft’s global operations with 20,000 employees across 10 cities and has seen robust growth in its cloud and AI services. The company reported a 38.44% increase in net profit for its India business. Nadella also emphasised partnerships with Indian entities such as RailTel, Apollo Hospitals, and upGrad to accelerate AI adoption across sectors to further solidify Microsoft’s leadership in India’s AI ecosystem.

]]>
You Can Now Build AI Agents in Kannada, Says GitHub Chief Thomas Dohmke https://analyticsindiamag.com/ai-news-updates/you-can-now-build-ai-agents-in-kannada-says-github-chief-thomas-dohmke/ Mon, 13 Jan 2025 08:43:20 +0000 https://analyticsindiamag.com/?p=10161324 “Every person can now start programming, whether it’s in Hindi, Brazilian, or Portuguese, and bring back the joy of coding in their native language,” Microsoft CEO Satya Nadella said.]]>

Microsoft announced the removal of the waitlist for GitHub Copilot Workspace, an AI-assisted coding environment. “There is no more waitlist for GitHub Copilot Workspace – the most advanced agentic editor. Start building with agents today,” CEO Satya Nadella said.

At the Microsoft AI Tour in Bengaluru, Karan MV, director of international developer relations at GitHub, demonstrated that Copilot’s Workspace can understand Indian languages, including Hindi and Kannada, while writing code. 

GitHub chief Thomas Dohmke explained how Copilot Workspace will change software development in local languages. “As always, Karan MV demonstrated the power of Copilot Workspace – building with agents in Kannada. This is the way. Every developer will conduct a symphony of AI agents in natural language,” he said.

The feature was introduced in May last year. “Think about this – every person can now start programming, whether it’s in Hindi, Brazilian, or Portuguese, and bring back the joy of coding in their native language,” Nadella said.

GitHub recently announced that 17 million Indian developers use the platform, second only to the United States. However, in 2021, it was reported that a whopping 67% of engineers graduating from Indian colleges do not possess the necessary English-speaking skills. 

Meanwhile, Microsoft just announced its largest investment in India yet – a $3 billion commitment to expand Azure’s infrastructure in the country. The company also revealed that it will train 10 million people in AI by 2030 as a part of its ADVANTA(I)GE INDIA initiative. 

GitHub Copilot Workspace, launched in April 2024 as a technical preview, is a new tool that helps developers brainstorm, plan, build, test, and run code using natural language. It helps developers work faster by allowing them to interact with AI using natural language. The tool captures user intent and proposes plans of action, facilitating a collaborative coding environment where multiple developers can work simultaneously on tasks.

The platform integrates seamlessly with GitHub and enables users to open issues and generate solutions directly within the workspace. During the tenth edition of its flagship developer conference, GitHub Universe 24, the company introduced several new feature additions to the GitHub Copilot Workspace that help developers use natural language to generate a structured plan based on the specifications of an issue and seamlessly create a pull request. 

Copilot Workspace also leverages generative AI capabilities to assist in coding, allowing teams to iterate on their code effectively and make changes across all the files in a repository. 

]]>
Microsoft Copilot is a ‘Huge Disaster,’ Says Salesforce CEO Marc Benioff https://analyticsindiamag.com/ai-news-updates/microsoft-copilot-is-a-huge-disaster-says-salesforce-ceo-marc-benioff/ Sat, 11 Jan 2025 10:00:32 +0000 https://analyticsindiamag.com/?p=10161211 "Customers don’t look at them and don’t take them seriously in AI, nor should they, because they’re not even making the AI themselves. "]]>

Microsoft and Salesforce’s AI agent war is getting serious. Salesforce CEO Marc Benioff, in a recent podcast, criticised Microsoft’s Copilot. “This Copilot thing has been a huge disaster for them from a branding and validation standpoint. Customers don’t look at them and don’t take them seriously in AI,” Benioff said.

🔥 Ignite Innovation with Generative AI! 🌟 Join AWS AI Conclave 2025 in Bengaluru! >

He was responding to Microsoft Chief Satya Nadella, who recently, in a podcast, indirectly took a dig at Salesforce by saying that traditional SaaS companies will collapse in the AI agent era. “I think the notion that business applications exist—that’s probably where they’ll all collapse, right, in the agent era. Because if you think about it, right, they are essentially CRUD databases with a bunch of business logic,” he said.

“Business logic is all going to these agents, and these agents are going to be multi-repo CRUD. They’re not going to discriminate between what the back end is; they’re going to update multiple databases, and all the logic will be in the AI tier,” Nadella added. 

Furthermore, Benioff expressed his scepticism about Microsoft’s approach to integrating AI into its enterprise offerings, particularly through its Copilot initiative.

🌠 Spark the Future of AI 🌍 – Free Virtual MiniCon for Visionaries & Innovators! >

“Microsoft has disappointed everybody with how they’ve approached this kind of AI world,” Benioff said. He suggested that Copilot, which incorporates OpenAI technology into Microsoft’s products has failed to deliver transformative results for customers.

“Customers are not finding themselves transformed with this Copilot technology,” Benioff said.  He added, “I’ve spoken to these customers, they barely use it, and that’s only if they don’t already have a ChatGPT license or something like that in front of them.”

Benioff said Salesforce is making progress in the AI space, highlighting the company’s focus on delivering an “agentic platform” that is already in production and widely adopted by enterprise customers. “We’re delivering this at scale globally to our customers,” he said, adding that Salesforce is currently handling two trillion enterprise AI transactions per week.

He suggested that Microsoft’s AI strategy has not resonated with enterprise customers, claiming, “Customers don’t look at them and don’t take them seriously in AI, nor should they, because they’re not even making the AI themselves.”

Benioff also commented on Microsoft’s business strategy, accusing the company of being a “fast follower” in the software industry. “I’m sure they will try to copy our stuff like they usually do and move towards us, but we’re out there right now in production with thousands of customers,” he said.

At the recent Microsoft AI Tour in Bengaluru, Nadella said that “building agents should be as simple as creating a spreadsheet”. He introduced a no-code platform called Copilot Studio that allows users to create new agents based on their needs.

“Think of AI as a co-pilot for your work. It’s the UI for AI,” Nadella said, illustrating the role it will play as an interface between employees and the AI. He gave the example of an AI agent in a healthcare setting, describing a scenario where a doctor prepares for a tumour board meeting, and the AI creates the agenda, prioritises cases, and takes detailed notes during the discussion.

Meanwhile, Salesforce launched Agentforce 2.0 last month, an upgraded version of its digital labour platform designed to augment enterprise teams with autonomous AI agents.

]]>
Microsoft Launches rStar-Math, Achieves Top-Level Math Reasoning  https://analyticsindiamag.com/ai-news-updates/microsoft-launches-rstar-math-achieves-top-level-math-reasoning/ Thu, 09 Jan 2025 13:18:52 +0000 https://analyticsindiamag.com/?p=10161079 Smaller models are easier to use, require less powerful hardware, and make advanced AI tools available to more people and organisations]]>

Microsoft researchers have developed ‘rStar-Math’, a method that enables small language models (SLMs) to solve challenging math problems with remarkable accuracy, matching or even surpassing larger models like OpenAI’s o1. Instead of relying on knowledge distillation from bigger models, rStar-Math allows smaller models to improve independently through self-evolution. 

“Our work demonstrates that small language models can achieve frontier-level performance in math reasoning through self-evolution and careful step-by-step verification,” the researchers said in the paper.

Why does this matter? Smaller models are easier to use, require less powerful hardware, and make advanced AI tools available to more people and organisations. They are especially useful in areas like education, math, coding, and research, where accurate, step-by-step reasoning is crucial. 

The open-source release of rStar-Math and Microsoft’s Phi-4 model on Hugging Face allows others to customise and use these tools for a wide range of applications, making AI more affordable and accessible.

The system uses Monte Carlo Tree Search (MCTS), a strategy often used in games like chess, to tackle problems in smaller, manageable steps. Each step is validated with code execution to ensure accuracy, avoiding the common issue of producing correct answers with flawed reasoning.

Features of rStar-Math: rStar-Math incorporates three innovations to improve performance. It uses MCTS rollouts to generate step-by-step training data, ensuring accuracy. A process preference model (PPM) evaluates and guides intermediate steps without relying on imprecise scoring. The system then evolves iteratively over four rounds to refine models and data for solving increasingly complex problems.

On the MATH benchmark, accuracy increased from 58.8% to 90%, outperforming OpenAI’s o1-preview. The system also solved 53.3% of problems in the USA Math Olympiad (AIME), ranking in the top 20% of high school competitors. It performed strongly on other benchmarks, including GSM8K, Olympiad Bench, and college-level challenges.

The study highlights the potential of smaller AI models to achieve advanced reasoning capabilities typically associated with larger systems. It also shows how such models can develop intrinsic self-reflection, enabling them to identify and correct errors during problem-solving.

The framework, along with its code and data, is open-source and available on GitHub. This makes it accessible to researchers and developers, paving the way for smaller, more efficient AI systems capable of handling complex reasoning tasks.

]]>
Microsoft’s Small Language Model Phi-4 is Now Available for Free https://analyticsindiamag.com/ai-news-updates/microsofts-small-language-model-phi-4-is-now-available-for-free/ Wed, 08 Jan 2025 17:26:19 +0000 https://analyticsindiamag.com/?p=10160987 The company has made the small language model available on Hugging Face and supports ten Indian languages too!]]>

Microsoft has finally made its latest small language model, Phi-4, available on Hugging Face. The 14 billion-parameter model can now be downloaded, fine-tuned, and deployed for free. 

Why does it matter?

Phi-4 is a tiny model but outperforms Llama 3.3 70B (nearly five times bigger) and OpenAI’s GPT-4o Mini on several benchmarks. In math competition questions, Phi-4 outperformed Gemini 1.5 Pro and OpenAI’s GPT-4o.

Microsoft’s detailed technical paper discusses numerous techniques and the curation of some of the highest-quality datasets used to train the model. The model is said to excel at complex reasoning capabilities. 

In an exclusive interview with AIM, Harkirat Behl, one of the creators of the model, said, “Big models are trained on all kinds of data and store information that may not be relevant.” He added that with sufficient effort in curating high-quality data, it is possible to match the performance levels of these models – and perhaps even surpass them. 

For Phi-4, Microsoft has not experimented with inference optimisation, and the focus is mainly on synthetic data. He revealed that once the model architecture is released, developers will be able to optimise it further and quantise it to run it on devices for local use on PCs and laptops. 

After Meta, Microsoft is one of the other big companies making significant strides in building open-weight models. Phi-4’s predecessor, Phi-3.5, was also made available for free on Hugging Face. 

That said, Meta, or even Microsoft for that matter, isn’t leading the open-source model race; China-based DeepSeek-V3 holds that position for now.

Though a much larger model with 671B parameters, it outperformed Meta’s flagship Llama 3.1 405B parameter model, among many other closed-source models. It is also three times faster than its predecessor, DeepSeek V2. 

Behl said that Phi-4 supports ten Indian languages. “I personally made sure and worked hard to get Phi-4 to interpret ten most common Indian languages”. The company is surely betting big on India. 

Yesterday, Microsoft CEO Satya Nadella was in Bangalore for the company’s AI Tour. He announced a $3 billion investment, Microsoft’s largest for the country yet, to expand Azure’s infrastructure in the country. Moreover, the company is set to train 10 million people in AI by 2030 as a part of its ADVANTA(I)GE INDIA initiative.

Last week, Nadella also met Telangana CM A Revanth Reddy in Hyderabad to discuss the state’s technology priorities, including AI, generative AI, and cloud development.

]]>
Why IT May Become the HR for AI Agents in the Future https://analyticsindiamag.com/global-tech/why-it-may-become-the-hr-for-ai-agents-in-the-future/ Wed, 08 Jan 2025 13:48:04 +0000 https://analyticsindiamag.com/?p=10160962 “AI agents are a multi-trillion dollar opportunity.”]]>

The era of AI agents has officially begun. Making way for them, NVIDIA chief Jensen Huang predicted that in the future, an organisation’s IT department would evolve into an ‘HR department for AI’. It would be responsible for onboarding, managing, and maintaining a new generation of AI agents. 

At the ongoing Consumer Electronics Show (CES) 2025, Huang said, “In a lot of ways, the IT department of every company is going to be the HR department of AI agents in the future. Today, they manage and maintain a bunch of software from the IT industry; in the future, they will maintain, nurture, onboard, and improve a whole bunch of digital agents and provision them to the companies to use.”

He added that these AI agents will work along with human employees, offering unprecedented capabilities in automation and efficiency across industries. Speaking to a captivated audience, Huang explained how specialised AI agents will become integral to companies, performing tasks ranging from customer service to complex problem-solving.

“AI agents are a multi-trillion dollar opportunity,” he said.  

Jensen Huang’s CES 2025 keynote wasn’t just about breakthroughs—it was a glimpse into how AI agents will shape the future. From physical AI that reasons, plans, and acts, to tools like Cosmos and Project DIGITS, NVIDIA is building the foundation for AI agents to integrate seamlessly into our lives and industries,” said RagaAI founder Gaurav Agarwal. 

Far away at the Microsoft AI Tour in Bengaluru, Microsoft chief Satya Nadella said that “building agents should be as simple as creating a spreadsheet”. He introduced a no-code platform called Copilot Studio that allows users to create new agents based on their needs.

“Think of AI as a co-pilot for your work. It’s the UI for AI,” Nadella said, illustrating the role it will play as an interface between employees and the AI. He gave the example of an AI agent in a healthcare setting, describing a scenario where a doctor prepares for a tumour board meeting, and the AI creates the agenda, prioritises cases, and takes detailed notes during the discussion.

Nadella also unveiled Copilot Actions, which allows users to create cross-application workflows that connect people, data, and tasks across the Microsoft 365 ecosystem.

Likewise, OpenAI chief Sam Altman recently predicted that AI agents could enter the workforce by 2025. “We believe that, by 2025, we may see the first AI agents join the workforce and materially change the output of companies,” Altman wrote in a recent blog post.

Meanwhile, Google published a comprehensive whitepaper exploring the development and functionality of AI agents. Last December, the company launched Gemini 2, which it said will have agentic capabilities. 

Too Soon? 

Google’s senior product manager, Logan Kilpatrick, feels that it will take at least another year before AI agents become a reality. “2025 is the year of AI vision capabilities going mainstream; 2026 will be agents,” he said. 

“There’s a ~12-month capabilities-to-wide-scale-production gap. Most vision use cases work now but aren’t widely deployed. Agents still need a little more work for billion-user-level scale,” Kilpatrick added.

According to a recent report, it could take OpenAI some time to launch AI agents. This is because the company is concerned about prompt injection, a type of attack where a large language model is tricked into following instructions from a malicious user.

Huang may be right. It looks like the primary responsibility of enterprise IT teams will be to ensure that the agents are safe to use and do not have access to data they are not supposed to have.

In an interview with AIM, Okta customer identity CTO Bhawna Singh spoke about the growing need to authorise AI agents. “A platform is needed to handle both authentication and authorisation, making sure not all data is accessible to the agent,” she said. 

She explained that since these AI agents interact with each other, it is essential that they have the right data access. “We need to make sure these agents are verified,” she said.

Similarly, NVIDIA NeMo is helping companies onboard and train their AI agents, mimicking the process of onboarding a new employee. “Nemo is essentially a digital employee pipeline where companies can provide feedback, define company-specific vocabulary, and set guardrails on the behaviour of these agents,” Huang explained. 

Recently, AI startup Composio launched AgentAuth, a product that efficiently integrates AI agents with third-party tools and APIs. It supports a variety of authentication protocols, including OAuth 2.0, OAuth 1.0, API keys, JWT, and Basic Authentication. 

The platform also integrates over 250 widely used apps and services, catering to diverse needs such as customer relationship management (CRM) systems and ticketing platforms.

“The biggest problem that people face while building agents is connecting them to reliable tools. For example, if someone builds a sales agent, they would need to connect it with CRMs like Salesforce, HubSpot, etc,” said Karan Vaidya, Composio chief, in an exclusive interview with AIM

Vertical AI Agents

The AI agents market, valued at $5.1 billion in 2024, is projected to soar to $47.1 billion by 2030. Just as companies today rely on SaaS services, in the near future, they will hire specialised AI agents to meet their needs. Employees will widely use autonomous agents to perform tasks like attending meetings, making summaries, drafting emails, and translating meetings live.

“The early winners in LLM-based solutions might just be general-purpose platforms. Over time, vertical AI agents will emerge. It’s like how, in the box software world, the early vendors were just trying to convince people to use software… As the market matures, it will get more sophisticated, and vertical solutions will become dominant players,” said Jared Friedman, group partner at Y Combinator, in a recent podcast with YC president Gary Tan.

Salesforce chief Marc Benioff describes AI agents as digital labour. “I am the CEO of a company that manages agents and humans, and I have a digital labour platform at my disposal to augment my support, sales, service, and marketing,” said Benioff. 

In India, Freshworks unveiled a new version of Freddy AI, an autonomous agent that resolved 45% of customer support requests and 40% of IT service requests (in beta).

However, as AI agents become increasingly common, selecting the right sectors for their implementation will be crucial. “Departments such as sales, marketing, and finance usually have well-established software systems like CRM, ERP, analytics dashboards, etc., so they can plug AI agents directly into these data pipelines,” said Ramprakash Ramamoorthy, director of AI research at Zoho and ManageEngine.

Besides SaaS companies, several Indian startups are also building AI agents. Bengaluru-based AI startup KOGO AI, founded by Praveer Kochhar and Raj K Gopalakrishnan, is developing AI agents and solutions to simplify workflows and improve productivity for businesses. The company recently launched an AI agent store.“We are currently building an agent that can look at a database and actually think like a data scientist or a business analyst, generating extremely intelligent questions,” Kochhar said in a recent podcast with AIM.

]]>
Microsoft Shows How to Code in Kannada https://analyticsindiamag.com/ai-news-updates/microsoft-shows-how-to-code-in-kannada/ Tue, 07 Jan 2025 12:05:33 +0000 https://analyticsindiamag.com/?p=10160897 The company reiterated GitHub Copilot's capabilities to understand natural language. ]]>

At the Microsoft AI Tour in Bengaluru, the company took the stage to demonstrate GitHub Copilot Workspace and its capabilities to assist users in writing code through natural-language queries. 

The company did not miss a chance to add a local flavour to the event. Karan MV, director of international developer relations at GitHub, demonstrated that Copilot’s Workspace can understand Indian languages, including Hindi and Kannada, while writing code. 

The feature was introduced in May last year. Microsoft CEO Satya Nadella said, “Think about this – every person can now start programming, whether it’s in Hindi, Brazilian, or Portuguese, and bring back the joy of coding in their native language.”

GitHub Copilot Workspace is an environment that assists developers with AI through their coding workflow. 

During the demonstration, Karan wanted to add functionality that allows images to be added to a product website. He asked Copilot in Hindi, “Ek page pe image upload karne ke liye koi idea batiye,” which translates to “Any idea for uploading images on a page?” 

Copilot Workspace then generated ideas, such as adding a file input element, using drag-and-drop, or creating a third-party library area inside the website. After selecting the preferred idea, Copilot Workspace generated a list of tasks needed to add the feature and then generated the code for it. 

Karan then went ahead and asked to add a preview of the uploaded image. He asked in Kannada, “Image upload madidamele adara preview thorsi,” which translates to “Show me a preview of the image once it is uploaded”. Copilot Workspace successfully added code that would display a preview of the image on the website. 

Kannada is the official and most spoken language of Karnataka. The capital city of Bengaluru is home to over 360,000 software engineers


GitHub said that 17 million Indian developers use the platform, second only to the United States of America. However, in 2021, it was reported that a whopping 67% of engineers graduating from Indian colleges do not possess the necessary English-speaking skills. 

Good prompts require a decent mastery of the language being used to prompt, and the best results are directly proportional to the clarity of the input commands. “Remember when we used to make fun of ‘primitive’ people who believed that using the right kinds of words will result in magic and incredible creations?” said Bojan Tunguz, a former AI engineer at NVIDIA, indicating the importance of the right choice of words. 

Therefore, if an AI tool wants to assist the wealth of developer talent in India, it is only going to pose a second challenge if English is the only supported language.

Over the past few months, GitHub has released over 100 new additions to the Copilot Workspace, repeatedly suggesting that it is the ‘developer environment of the future’. The newly announced build and error repair feature suggests potential solutions to code errors and provides an option to fix the code manually or let Copilot Workspace rectify it automatically. 

GitHub also introduced enhancements to the Brainstorming Mode in Copilot Workspace. It lets users collaborate and explore all the repositories, suggest solutions for issues, ask questions and help improve the overall problem-solving process.

Last month, GitHub announced that it is making Copilot available for free within Visual Studio Code (VS Code). “We’re seeing a surge in Copilot Free activation in India and across APAC. It’s incredible to see (sic),” CEO Thomas Dohmke said in a post on X.

Note that GitHub Copilot is an AI coding assistant built inside editors, whereas GitHub Copilot Workspace is a broader platform that integrates AI throughout the entirety of the development workflow.  

Meanwhile, Microsoft just announced its largest investment in India yet, a $3 billion commitment to expand Azure’s infrastructure in the country. The company also revealed that it will train 10 million people in AI by 2030 as a part of its ADVANTA(I)GE INDIA initiative. 

]]>
Microsoft Announces $3 Billion Investment to Expand Azure in India https://analyticsindiamag.com/ai-news-updates/microsoft-announces-3-billion-investment-to-expand-azure-in-india/ Tue, 07 Jan 2025 09:05:51 +0000 https://analyticsindiamag.com/?p=10160868 Microsoft will train 10 million people in AI by 2030 as a part of its ADVANTA(I)GE INDIA initiative.]]>

Microsoft CEO Satya Nadella announced Microsoft’s largest investment in India yet, a $3 billion commitment to expand Azure’s infrastructure in the country, at the Microsoft AI Tour in Bengaluru on Tuesday. The investment will scale Microsoft’s regional cloud infrastructure to bolster AI and computing capabilities.

Emphasising the importance of infrastructure for AI innovation, Nadella said, “AI doesn’t sit on its own; it requires the entire compute stack.” He highlighted that Microsoft has built over 60 regions and 300 data centres worldwide, including existing regions in Central India, South India, and West India.

The additional investment is set to scale up Microsoft’s data centre operations across India. Speaking about the importance of infrastructure in AI development, Nadella said, “Infrastructure needs to be the highest priority, and we are innovating at every layer of it.”

Moreover, Microsoft will train 10 million people in AI by 2030 as a part of its ADVANTA(I)GE INDIA initiative. 

Microsoft recently announced that it is projected to invest approximately $80 billion in FY 2025 globally to establish AI-enabled data centres for training AI models and deploying AI and cloud applications worldwide.

Nadella also shared insights from his meeting with Prime Minister Narendra Modi, who discussed India’s vision for an AI-driven future. Nadella praised the synergy between India’s AI mission, the entrepreneurial ecosystem, and its demographics, describing them as elements of a “virtuous cycle”.

“Thank you, PM Narendra Modi ji, for your leadership. Excited to build on our commitment to making India AI-first and work together on our continued expansion in the country to ensure every Indian benefits from this AI platform shift,” Nadella said in a post on X.

“It was indeed a delight to meet you, Satya Nadella! Glad to know about Microsoft’s ambitious expansion and investment plans in India. It was also wonderful discussing various aspects of tech, innovation and AI in our meeting,” PM Modi replied.

Moreover, the CEO outlined a new metric for evaluating AI and compute efficiency, which he described as “tokens per dollar per watt”. Linking this to broader economic growth, he said, “Two years from now, five years from now, 10 years from now, we will be talking about the correlation between GDP growth and how efficiently communities and industries drive that equation.”

As part of its expansion, Microsoft plans to integrate renewable energy and advanced engineering practices to make its data centres more sustainable. Nadella highlighted innovations such as liquid-cooled AI accelerators, zero-waste construction, and zero-water usage facilities.

Nadella is currently touring India as part of the Microsoft AI Tour, where he spoke in Bengaluru on January 7 and will also speak in Delhi on January 8. Last week, he met with Telangana Chief Minister A. Revanth Reddy in Hyderabad to discuss the state’s technology priorities, including AI, generative AI, and cloud development.

Microsoft India employs over 20,000 people across 10 cities, including Ahmedabad, Bengaluru, Chennai, Gurugram, Delhi, Noida, Kolkata, Mumbai, and Pune, with Hyderabad alone accounting for half the workforce at 10,000 employees.

For the fiscal year ending March 30, Microsoft’s India business posted a 38.44% increase in net profit year over year, driven by the steady growth in cloud adoption and AI. 

]]>