Analytics India Magazine https://analyticsindiamag.com AIM - News and Insights on AI, GCC, IT, and Tech Sat, 22 Mar 2025 04:30:48 +0000 en-US hourly 1 https://analyticsindiamag.com/wp-content/uploads/2025/02/cropped-AIM-Favicon-32x32.png Analytics India Magazine https://analyticsindiamag.com 32 32 ‘Most Data Centres Are Not Ready for Liquid Cooling’, says Oracle Exec on NVIDIA Blackwell https://analyticsindiamag.com/global-tech/most-data-centres-are-not-ready-for-liquid-cooling-says-oracle-exec-on-nvidia-blackwell/ Sat, 22 Mar 2025 04:30:47 +0000 https://analyticsindiamag.com/?p=10166488 Built on the Blackwell architecture introduced last year, Blackwell Ultra features the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HG B300 NVL16 system.]]>

Oracle Cloud Infrastructure (OCI) is bringing NVIDIA’s Blackwell Ultra GPUs to its cloud platform, a move announced at the GTC 2025 AI conference. While this expands OCI’s capabilities, it also demands new infrastructure solutions, such as implementing liquid cooling in its data centres. But it comes with its own challenges. 

“Most data centres are not ready for liquid cooling,” said Karan Batta, senior vice president at OCI, in an exclusive interview with AIM, acknowledging the complexity of managing the heat produced by the new generation of GPUs.

He added that cloud providers must choose between passive or active cooling, full-loop systems, or sidecar approaches to integrate liquid cooling effectively. Batta further noted that while server racks follow a standard design (and can be copied from NVIDIA’s setup), the real complexity lies in data centre design and networking. 

Batta explained that today, every cloud provider essentially buys a rack from NVIDIA. “The differentiation comes from the data centre design—how hot you can run these GPUs and how much you can scale them,” he said, adding that ensuring the highest uptime and minimising failures is critical. 

“The biggest challenge is not deploying the GPUs—anyone can do that—but actually managing and operating a massive GPU cluster,” Batta said.  

Built on the Blackwell architecture introduced last year, Blackwell Ultra features the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HG B300 NVL16 system. The GB300 NVL72 delivers 1.5 times the AI performance of the NVIDIA GB200 NVL72.

Last year, Oracle also announced the launch of the world’s first zettascale cloud computing clusters powered by NVIDIA Blackwell GPUs last year. These clusters offer up to 131,072 GPUs and deliver 2.4 zettaFLOPS peak performance.

Batta added that NVIDIA’s DGX Cloud offering is also hosted on Oracle Cloud Infrastructure. “As we launch GB200 this quarter and later GB300, DGX Cloud will continue to run on our infrastructure,” he said.  

Additionally, Batta mentioned that Oracle is collaborating with other cloud service providers, such as Google and Microsoft Azure, to establish multi-cloud partnerships at the infrastructure level by deploying OCI (Oracle Cloud Infrastructure) within their data centers.

“We’re already doing a lot with Microsoft by integrating Oracle databases and various other services. With Google, it also makes sense because their customer base is different from ours—there’s no overlap,” said Batta, adding that this leaves room for collaboration, especially since Google has a strong AI model, Gemini.

Talking of compute needs, he said it is not going to slow down. “It will only increase as customers find more use cases and more inferencing to do,” Batta said. Oracle’s strategy is to be an open cloud provider that offers a wide variety of AI models rather than favouring any specific one. “We are already collaborating with OpenAI, Meta, and Cohere, and we continuously update our offerings with the latest versions.”

OCI x NVIDIA AI Enterprise 

Oracle has partnered with NVIDIA AI Enterprise, allowing customers to accelerate AI adoption, including sovereign AI initiatives. This cloud-native software platform will be available across OCI’s distributed cloud and purchasable using Oracle Universal Credits.

Batta said Oracle customers can now access the NVIDIA AI Enterprise suite within Oracle Cloud. He explained that customers do not need to purchase it separately, instead, they can use their existing Oracle Cloud credits to access it.

Unlike other NVIDIA AI Enterprise offerings, OCI will make it accessible directly through the OCI Console, enabling faster deployment, direct billing, and customer support. 

Customers can use over 160 AI tools, including NVIDIA NIM microservices, to streamline generative AI model deployment. The integration allows enterprises to build applications and manage data across multiple deployment environments.

Batta said that for Oracle, ‘distributed cloud’ refers not just to the commercial cloud but also to environments such as OCI’s public regions, Government Clouds, sovereign clouds, OCI Dedicated Region, Oracle Alloy, OCI Compute Cloud@Customer, and OCI Roving Edge Devices.

He further added that Nomura Research Institute (NRI), one of the largest financial system integrators in Japan, uses Oracle Alloy to deliver customised cloud services with NVIDIA Hopper GPUs and plans to deploy NVIDIA AI Enterprise to support AI use cases. 

“Half of the Nikkei Index runs through their books, and all of that operates on Dedicated Region—one in Tokyo and a disaster recovery site in Osaka. They are also deploying GPUs and have access to the NVIDIA AI Enterprise suite of software as well,” Batta said. 

Speaking of India, he said that Oracle already has two cloud regions in India and is building a third. 

“We also have a multi-cloud strategy in India, where we are partnering with AWS, Google, and Microsoft to interconnect our regions and provide our database services through these cloud providers,” he concluded.

]]>
Cloudflare Releases AI Agent to Help Make Configurations Easier https://analyticsindiamag.com/ai-news-updates/cloudflare-releases-ai-agent-to-help-make-configurations-easier/ Fri, 21 Mar 2025 12:34:44 +0000 https://analyticsindiamag.com/?p=10166479 Cloudflare is starting to add AI-powered overviews to help customers manage things more easily.]]>

Cloudflare, the prominent connectivity cloud company, recently announced that it is introducing its first version of AI agents, Cloudy, to enable customers to harness AI-powered capabilities to improve management across its suite of products.

The company mentioned that the goal of this is to automate the time-consuming task of manually reviewing and contextualising Custom Rules in Cloudflare’s Web Application Firewall (WAF) and understanding Gateway policies in Cloudflare One. 

The AI agent has been added to two Cloudflare products as a beta preview and will be expanded to other products in the near future.

Cloudflare’s WAF helps customers protect their web applications from attacks like SQL injection, cross-site scripting (XSS), and other vulnerabilities. Cloudflare One is a SASE (Secure Access Service Edge) platform that helps enterprises manage the security of their employees and tools.

When it comes to WAF Custom Rules in Cloudflare, which helps tune the web traffic to the application, Cloudy will show an AI-powered overview of all the rules. The AI agent will help identify redundant rules, optimise execution order, analyse conflicting rules, and identify disabled rules.

The company stated that this feature will help security teams spend less time auditing configurations.

For Cloudflare One, the AI agent works with Cloudflare Gateway to help manage policy configurations, which helps organisations block access to malicious sites, prevent data loss violations, control user access, and more. 

With a quick summary of policies, it is easier for customers to get a clear understanding, spot misconfiguration, and find areas for improvement, stated the company.

The core of Cloudy’s functionality is Cloudflare Workers AI, which makes use of advanced large language models (LLMs) to process vast amounts of information, including its policy and rules data.

Cloudflare aims to enhance security by reducing the risk of human error using this technology.

]]>
Now It’s Time for Vibe Debugging https://analyticsindiamag.com/ai-features/now-its-time-for-vibe-debugging/ Fri, 21 Mar 2025 12:26:28 +0000 https://analyticsindiamag.com/?p=10166472 Vibe coding is a thing now. And, so is vibe debugging.]]>

Vibe coding is a term coined by OpenAI co-founder Andrej Karpathy. With it, one focuses on the idea rather than the code and builds something out of it. While vibe coding is popular among coders and non-coders, the phrase ‘Vibe debugging’ is catching up.  

Debugging is Becoming More Important With Vibe Coding

A coder on Reddit shared with AIM that after starting to code with Claude Sonnet 3.5 without having an idea about coding, the person realised that half of the implementations used were not functional for their project and hence fixing those issues became a prime concern.

“In the end, debugging is still necessary, as LLMs will get into, or you’ll hit a wall where they cannot fix a bug,” the Redditor added. “Having a human who knows what they’re doing and can find the source of the issue is still paramount, as LLMs can spin in circles infinitely without any idea that they’re attempting to fix the wrong part of the codebase.”

Meanwhile, Nitin Rai, an AI engineer, told AIM that if one is not a developer, they should be aware of the potential pitfalls, as vibe debugging is 10x more frustrating than regular debugging. “Being dependent on LLM’s Output, we don’t form a mental model of how data flows, how it’s transformed, and where and when something breaks. It’s too late,” Rai said. 

‘Vibe Coding Isn’t Perfect’

Vibe coding has made code accessible to a larger audience, including the ones without any technical knowledge, and empowered them to build various applications and games.

However, Reddit has been exploding with threads citing concerns associated with it. To start with, a Reddit user posted, “Forget Vibe coding. Vibe debugging is the future. Create 20,000 lines in 20 minutes, spend 2 years debugging.”

Among the reactions to the Reddit threads, users have improvised the term with funny takes like “spookghetti code”, and “vibeghetti code”.

In a Reddit thread, a user stated, “Vibe coding is the future unless you need to do vibe maintenance.”

Another user encourages using AI models like Claude as your co-pilot and not your autopilot. One needs to read and understand the code. Otherwise, the vibe check might be a reason for the server meltdown.

In the same thread, the original poster highlighted that vibe coding is risky in a production environment. At the same time, the user mentioned that it is a personal decision, but proper logging and tests may be necessary to keep things in control.

With many people jumping into code with the help of AI, the focus on debugging is crucial as more code goes into production. Also, per a report, the debugging and error detection function segment is also set to grow at 24.2% CAGR by 2030. 

Mohmoud Zareef, GenAI software engineer at TestOne Teknoloji Çözümleri, told AIM that he hates the phrase “vibe coding” or “vibe debugging.” He believes it implies that developers who can code and utilise AI are not true developers, adding unnecessary stigma and making programming appear inferior.

On the same note, Zareef added that some AI-generated coding bugs are simple, while some are pretty complicated. “I find learning how to use AI well makes it much easier to decrease the number of bugs; for example, always ask AI not to oner engineer,” he said. “Reading the documentation or searching online can save hours of wrestling with the AI to fix a bug.”

]]>
Cognizant to Establish 14-Acre Learning Centre in Chennai, Training 100,000 Annually in AI https://analyticsindiamag.com/ai-news-updates/cognizant-to-establish-14-acre-learning-centre-in-chennai-training-100000-annually-in-ai/ Fri, 21 Mar 2025 12:17:40 +0000 https://analyticsindiamag.com/?p=10166473 It will function as the primary hub supporting other training centers being set up across Cognizant’s campuses in Hyderabad, Pune, Kochi, and Coimbatore. ]]>

Cognizant has announced its plans to build a 14-acre Cognizant Immersive Learning Centre (CILC) at its Siruseri campus in Chennai. The new facility is expected to train 100,000 individuals annually in AI.

Set to be completed within three years, the learning centre will be equipped with 14,000 seats, smart classrooms, incubator hubs, design thinking centres, client experience spaces, and residential accommodations. 

It will function as the primary hub supporting other training centres being set up across Cognizant’s campuses in Hyderabad, Pune, Kochi, and Coimbatore.

Ravi Kumar S, CEO, Cognizant, said “Our new state-of-the-art centres will power new skills and our culture of continuous learning, enabling our people to drive business impact for enterprises globally.”

As the central training hub for Cognizant’s school graduate program in India, the facility will also host intensive boot camps for fresh graduates. Additionally, it will serve as a collaborative space for technology partners, academia, and clients to engage in research and learning programs.

Cognizant says that the expansion in Chennai is part of its broader infrastructure growth in India. Since 2024, the company has launched new delivery centers in Bhubaneswar and Indore, with a techfin center set to open in GIFT City, Gujarat. 

The company had plans to consolidate its Chennai operations into three of its own buildings located at Madras Export Processing Zone (MEPZ), Sholinganallur, and Siruseri, with headquarters relocating to the MEPZ campus near Tambaram by December 2024.

This comes when Cognizant sold its campus in Thoraipakkam on Chennai’s Old Mahabalipuram Road to Bagmane in a ₹612 crore deal, according to a report by The Hindustan Times.

Additionally, Cognizant has upgraded nearly one million square feet at its Hyderabad facility and modernized its Kolkata campus.

Cognizant has been ramping up its focus on AI and GenAI skills, training 277,000 employees in 2024 alone, with 168,000 completing a GenAI course. To date, over 220,000 associates have been upskilled in GenAI.

Cognizant reported full-year 2024 revenue of $19.736 billion, representing a 1.98% increase compared to the previous year. 

This growth reverses the slight downward trend in 2023, when the company reported revenue of $19.353 billion, a marginal 0.39% decline from 2022 and a 4.9% revenue growth in 2022.

]]>
Oracle Launches AI Agent Studio for Enterprise Applications https://analyticsindiamag.com/ai-news-updates/oracle-launches-ai-agent-studio-for-enterprise-applications/ Fri, 21 Mar 2025 11:46:51 +0000 https://analyticsindiamag.com/?p=10166470 The studio includes a range of features, including agent template libraries for creating AI agents.]]>

Oracle on Friday introduced Oracle AI Agent Studio, a platform designed to help Oracle Fusion Cloud Applications customers and partners create, deploy, and manage AI agents and agent teams. Announced at Oracle CloudWorld in London, the new platform is available at no additional cost.

Oracle AI Agent Studio provides tools for developing AI agents tailored to business needs. These agents integrate with Oracle Fusion Applications and support third-party systems. 

Steve Miranda, executive vice president of applications at Oracle, said, “AI agents are the next phase of evolution in enterprise applications. Our AI Agent Studio builds on the 50+ AI agents we have already introduced and gives our customers and partners the flexibility to create and manage their own AI agents.”

The studio includes a range of features, including agent template libraries for creating AI agents, agent team orchestration for managing multi-agent workflows, and agent extensibility for modifying pre-packaged Oracle AI agents. It also offers a choice of large language models, integration with Oracle Fusion Applications, secure third-party system connections, and built-in validation and testing tools.

Industry leaders have expressed support for the initiative. Lan Guan, chief AI officer at Accenture, said, “Agentic architectures will enter the mainstream in 2025, with three times as many organisations planning to invest compared to 2024. Oracle’s AI Agent Studio will allow us to orchestrate more powerful agents to drive productivity and growth.”

Mauro Schiavon, global chief commercial officer at Deloitte Consulting LLP, noted the challenge organisations face in managing AI agents. “Platforms like Oracle’s new AI Agent Studio can enable customisation that addresses unique business needs,” he said. Dan Priest, US chief AI officer at PwC, added, “We’re entering a period of agentic organizations that will fundamentally change how we work across functions and industries.”

Analysts see Oracle AI Agent Studio as a key development in AI adoption. Holger Mueller, vice president at Constellation Research, stated, “The evolution of AI across the enterprise is moving at a rapid pace. By enabling agents to be created, extended, deployed, and managed across the entire enterprise, Oracle will help its customers accelerate adoption and automation.”

Oracle Fusion Applications Suite continues to integrate AI capabilities, enabling organisations to manage finance, HR, supply chain, and customer experience data on a single platform.

]]>
We are Now a Power-Limited Industry, says Jensen Huang https://analyticsindiamag.com/deep-tech/we-are-now-a-power-limited-industry-says-jensen-huang/ Fri, 21 Mar 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10166456 The NVIDIA CEO introduced the concept of ‘AI factories’ as the new standard for data centre infrastructure. ]]>

AI has reached a critical juncture, becoming more intelligent and useful due to its reasoning ability. This advancement has led to a significant increase in computational requirements, with the industry needing much more computing power than previously anticipated. 

The generation of tokens for reasoning is a key factor in this increased demand, according to NVIDIA CEO Jensen Huang, who recently addressed the future of AI and computing infrastructure at the GTC 2025 summit in San Jose earlier this week. 

His keynote highlighted AI’s rapid evolution and the immense computational power required to support its growth. “Every single data centre in the future will be power-limited. We are now a power-limited industry,” he said.

With AI models growing exponentially in complexity and scale, the race is on to build data centres, or what Huang calls “AI factories”, that are not only massively powerful but also energy-efficient.

The Rise of the AI Factory

Huang introduced the concept of AI factories as the new standard for data centre infrastructure. These centres, which are no longer simply repositories of computation or storage, have a singular focus—to generate the tokens that power AI. 

He described them as “factories because it has one job, and that is to generate these tokens that are then reconstituted into music, words, videos, research, chemicals, or proteins”.

AI factories, according to Huang, are becoming the foundation for future industries. “In the past, we wrote the software, and we ran it on computers. In the future, the computer is going to generate the tokens for the software.”

Huang predicts a shift from traditional computing to machine learning-based systems. This transition, combined with AI’s growing demand for infrastructure, is expected to drive “data centre buildouts to a trillion-dollar mark very soon”, he believes.

Power Problem is Also a Revenue Problem

As data centres expand, they will face significant power limitations. This underscores the need for more efficient technologies, including advanced cooling systems and chip designs, to manage energy consumption effectively.

Huang noted that the computational requirements for modern AI, especially reasoning and agentic AI, are “easily a hundred times more than we thought we needed this time last year”. 

This explosion in demand places enormous strain on data centres’ energy consumption. His keynote made it clear that moving forward, energy efficiency isn’t just a sustainability concern; it will be directly tied to profitability.

“Your revenues are power limited. You could figure out what your revenues will be based on the power you have to work with,” he said. 

This shift will influence everything from how AI models are trained and deployed to how entire industries operate. In this regard, power is the ultimate constraint in AI-dominated computation. This limitation is reshaping both the design and operation of data centres around the world.

“The more you buy, the more you make,” Huang quipped, encouraging businesses to view their investments in NVIDIA’s accelerated computing platforms as the key to unlocking the full potential of AI-driven value creation.

Scaling Up Before Scaling Out

Huang explained NVIDIA’s approach to managing this power limitation, which would be a fundamental rethinking of scale. 

“Before you scale out, you have to scale up,” he stated. NVIDIA’s new Blackwell platform demonstrates this principle with its extreme scale-up architecture, featuring “the most extreme scale-up the world has ever done”. 

A single rack delivers an astonishing one-exaflop performance within a fully liquid-cooled, high-density design.

By scaling up, data centres can dramatically reduce inefficiencies that occur when spreading workloads across less integrated systems. 

Huang explained that if data centres had scaled out instead of scaling up, the cost would have been way too much power and energy. He pointed out that, as a result, deep learning would have never happened.

Blackwell, a Path to 25x Energy Efficiency

With the launch of NVIDIA’s Blackwell architecture, Huang highlighted a leap in performance and efficiency. According to him, the goal is to deliver the most energy-efficient compute architecture you can possibly get.

Huang believes NVIDIA has cracked the code for future-ready AI infrastructure by combining innovations in hardware, such as the Grace Blackwell system and NVLink 72 architecture, with softwares like NVIDIA Dynamo, which he described as “the operating system of an AI factory”.

Explaining the broader significance, he said, “This is ultimate Moore’s Law. There’s only so much energy we can get into a data centre, so within ISO power, Blackwell is 25 times [better].”

AI Factories at Gigawatt Scale

NVIDIA’s ambitions don’t stop with Blackwell. Huang outlined a roadmap extending years into the future, with each generation bringing new leaps in scale and efficiency. 

Upcoming architectures like Vera Rubin and Rubin Ultra promise “900 times scale-up flops” and AI factories at “gigawatt” scales.

As these AI factories become the standard for data centre design, they will rely heavily on advancements in silicon photonics, liquid cooling, and modular architectures. 

Huang likened the current AI revolution to the dawn of the industrial era, naming NVIDIA’s AI factory operating system Dynamo in homage to the first instrument that powered the last industrial revolution. 

“Dynamo was the first instrument that started the last industrial revolution—the industrial revolution of energy. Water comes in, electricity comes out. [It’s] pretty fantastic,” he said. “Now we’re building AI factories, and this is where it all begins.”

]]>
Toyota to Establish First R&D Centre in India, To Hire 1,000 Engineers by 2027 https://analyticsindiamag.com/ai-news-updates/toyota-to-establish-first-rd-centre-in-india-to-hire-1000-engineers-by-2027/ Fri, 21 Mar 2025 09:27:45 +0000 https://analyticsindiamag.com/?p=10166455 The facility, located in Bengaluru, will start with a team of about 200 employees.]]>

Toyota Motor Corp is setting up its first R&D centre in India, reinforcing its commitment to the country as a key market. The facility, located in Bengaluru, will start with a team of about 200 employees and is expected to expand to around 1,000 engineers by 2027, according to Bloomberg.

The decision follows Toyota’s move last year to designate India as a hub for its operations across the Middle East, East Asia, and Oceania. This initiative is part of the automaker’s broader strategy to enhance collaboration with Suzuki Motor Corp. and establish India as a centre for clean and green technologies.

Toyota’s new R&D facility will be its third in the Asia-Pacific region outside Japan, following similar investments in China and Thailand. While initially focused on the Indian market, the centre could evolve into a global R&D hub, mirroring Mercedes-Benz’s Bengaluru facility, which employs over 9,000 people.

The automaker is yet to introduce plug-in electric vehicles in India, relying instead on gasoline and hybrid models. Toyota has also leveraged its partnership with Suzuki Motor Corp., in which it holds a 5.4% stake, to strengthen its presence in the world’s third-largest vehicle market. 

The company is closely observing Suzuki’s R&D operations in Rohtak, which is one of India’s largest auto engineering hubs, employing approximately 3,000 engineers.

Toyota previously considered establishing an R&D centre in India in 2010 but did not proceed with the plan. Now, the company is intensifying its partnership with Suzuki to integrate R&D and product development. 

A key example of this collaboration is Toyota’s upcoming Urban Cruiser EV, a rebadged version of Suzuki’s first electric vehicle, the e-Vitara. The model, set to be manufactured at Suzuki Motor Gujarat from 2025, marks the automakers’ joint efforts to enter both Indian and global EV markets.

]]>
ServiceNow Researchers Release Foundational Model to Generate SVG from Text and Video https://analyticsindiamag.com/ai-news-updates/servicenow-researchers-release-foundational-model-to-generate-svg-from-text-and-video/ Fri, 21 Mar 2025 09:17:32 +0000 https://analyticsindiamag.com/?p=10166445 StarVector, the new open source foundational model, out on Hugging Face, can help designers generate SVG files. ]]>

On Thursday, a group of researchers from ServiceNow released a new foundational model, StarVector, that helps generate Scalable Vector Graphics (SVG) from text and image inputs.

Juan A. Rodriguez, an AI researcher at ServiceNow Research, announced on X about the model release and its code.

StarVector is a multimodal large language model (MLLM) designed for Scalable Vector Graphics (SVG) generation from images or text instructions. It addresses the limitations of previous SVG generation methods that often produced artifacts and struggled with SVG primitives beyond path curves. 

The research paper stated that StarVector works directly in the SVG code space, leveraging visual understanding to apply accurate SVG primitives for compact, precise outputs. 

To train StarVector, the researchers created SVG-Stack, a large-scale dataset of 2 million samples. They also introduce SVG-Bench, a benchmark across ten datasets and three tasks: Image-to-SVG, Text-to-SVG generation, and diagram generation.

StarVector’s architecture integrates an image encoder to project images into visual tokens and a transformer language model to learn the relationships between instructions, visual features, and SVG code sequences. This enables StarVector to perform image vectorisation and text-driven SVG generation, producing more compact and semantically rich SVGs.

StarVector demonstrates strong performance compared to existing models in image-to-SVG and text-to-SVG tasks. As per the benchmark results, the model outperformed models like GPT-4 Vision (2023), and Potrace.

Rodriguez mentioned that even with the advancements in the model, it hallucinates, sometimes producing inaccurate details. He added that they are actively working on improving and tackling such challenges.

The model is available on Hugging Face, and its code is open-sourced on GitHub under Apache 2.0 licence.

]]>
Cohesity Unveils ‘Industry’s First AI Search for On-Premises Backup Data’ https://analyticsindiamag.com/ai-news-updates/cohesity-unveils-industrys-first-ai-search-for-on-premises-backup-data/ Fri, 21 Mar 2025 06:31:52 +0000 https://analyticsindiamag.com/?p=10166436 This solution will be compatible with Cisco UCS, Hewlett Packard Enterprise, and Nutanix.]]>

Cohesity, a data security platform, has announced a significant expansion of Cohesity Gaia, its enterprise knowledge discovery assistant. This development introduces what is claimed to be one of the industry’s first AI-powered search capabilities for backup data stored on-premises. 

This marks a major leap in the enterprise data management ecosystem. By leveraging NVIDIA’s accelerated computing and enterprise AI software, including NVIDIA NIM microservices and NVIDIA NeMo Retriever, Cohesity Gaia seamlessly integrates generative AI into backup and archival processes. 

This enables enterprises to enhance efficiency, innovation, and overall growth potential through deeper data insights. 

Pat Lee, vice president of strategic enterprise partnerships at NVIDIA, highlighted the benefits of this collaboration, and said, Enterprises can now harness AI-driven insights directly within their 8 to preserve data accessibility and security while unlocking new levels of intelligence.”

This solution will be compatible with Cisco Unified Computing System (UCS), Hewlett Packard Enterprise (HPE), and Nutanix and offer various deployment options.

Moreover, customers like JSR Corporation, a Japanese research and manufacturing company, are also evaluating the benefits of this solution.

As enterprises adopt hybrid cloud strategies, many retain critical data on-premises to meet security, compliance, and performance requirements. By extending Gaia to these environments, organisations can adopt high-quality data insights while maintaining control over their infrastructure.

Sanjay Poonen, CEO and president of Cohesity, also emphasised the importance of on-premises AI solutions.

Cohesity Gaia now offers enterprises enhanced speed, accuracy, and efficiency in data search and discovery. Its multi-lingual indexing and querying capabilities allow global organisations to analyse data in multiple languages.

The infrastructure is scalable and customisable to meet business requirements, with a reference architecture designed for seamless deployment across hardware platforms. 

Pre-packaged large language models (LLMs) on-premises ensure that backup data remains secure without cloud access. Its optimised architecture allows efficient searches across petabyte-scale datasets, making retrieval fast and reliable.

]]>
Anthropic Introduces Web Search for Claude Chatbot https://analyticsindiamag.com/ai-news-updates/anthropic-introduces-web-search-for-claude-chatbot/ Fri, 21 Mar 2025 05:55:52 +0000 https://analyticsindiamag.com/?p=10166434 This update brings Claude in line with rivals like ChatGPT, Gemini, Le Chat, and Grok. ]]>

Anthropic, the AI startup behind the Claude family of models, has added web search to its AI chatbot Claude, a long-missing feature now available in preview for paid users in the US. According to their blog, free users and more regions will gain access soon. 

Users can enable web search in the Claude web app’s settings, allowing the chatbot to pull information from the internet when needed. Currently, the feature works only with Anthropic’s latest model, Claude 3.7 Sonnet.

This update puts Claude on par with competitors like OpenAI’s ChatGPT, Google’s Gemini, Mistral’s Le Chat, and xAI’s Grok. 

In a story by Verge, the challenges of AI search include combining probabilistic language models and web search, making accuracy unpredictable. Unlike deterministic systems, language models can vary in their responses, sometimes leading to serious errors.

Anthropic has also listed down use cases.

Claude’s web search helps sales teams follow industry trends, analysts find market data, researchers access sources and spot gaps, and shoppers compare products and prices.

The company also plans to release voice-based conversational features soon, the Financial Times reported

Mike Krieger, Anthropic’s CPO, said the company already has prototypes for the same and added that if Claude is autonomously operating a computer, the natural user interface is to speak to it. 

A few weeks ago, Anthropic announced a $3.5 billion Series E funding round, bringing its post-money valuation to $61.5 billion. The company also announced its newest Claude 3.7 Sonnet model, which earned widespread praise for its capabilities in tasks involving generating code.

]]>
Developers Beware! AI Coding Tools May Aid Hackers https://analyticsindiamag.com/ai-features/developers-beware-ai-coding-tools-may-aid-hackers/ Fri, 21 Mar 2025 05:30:00 +0000 https://analyticsindiamag.com/?p=10166422 Security researchers have found that hackers can exploit GitHub Copilot and Cursor coding assistants. ]]>

AI coding is a security mess, and AI coding assistants are already in the crosshairs.

The threat posed by AI coding assistants just got real when security researchers uncovered a new attack vector that enables hackers to weaponise the coding agents using GitHub Copilot and Cursor.

Rules File Backdoor is a New Attack Vector

The security researchers at Pillar Security have uncovered a new supply chain attack vector named “Rules File Backdoor.” The technique, labelled dangerous by researchers, enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions.

The instructions can pose as innocent configuration files used by Cursor and GitHub Copilot.

Instructions are injected into rule files, which are configuration files that guide AI Agent behaviour when generating or modifying code. They shape the coding standards, project architecture, and best practices involved in AI-generated code.

Here is what a rules file looks like from Cursor’s documentation:

Usually, the rule files are available through central repositories with global access and distributed through open-source communities without proper security vetting.

The researchers explained, “By exploiting hidden Unicode characters and sophisticated evasion techniques in the model facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews.”

To anyone using the code assistant, the attack is unnoticeable, which allows malicious code to silently propagate through projects, with the potential to affect millions of end users through compromised code.

How Does It Work?

As per the research report, the attackers can exploit the AI’s contextual understanding by embedding carefully crafted prompts through the rule files. When a user starts code generation, the malicious rules tell the AI to produce code with security vulnerabilities or backdoors.

They explained that the attack uses a combination of techniques. It manipulates the context by inserting seemingly innocuous instructions that subtly alter code output, employs Unicode obfuscation to conceal malicious instructions using invisible characters, and hijacks the AI’s semantic understanding with linguistic patterns to generate vulnerable code.

Furthermore, the attack works across different AI coding assistants, indicating widespread weakness across various AI coding platforms.

Testing The Theory With Cursor and GitHub Copilot

Security researchers tested and documented the attack potential. Starting with Cursor, the ‘Rules for AI’ feature allowed them to create a rule file that appeared harmless to human reviewers. The file included invisible Unicode characters disguising malicious instructions.

Next, they used Cursor’s AI Agent mode to create an HTML page, with the prompt, “Create a simple HTML-only page”. The observed output contained a malicious script sourced from an attacker-controlled site.

The researchers noted that the AI assistant never mentioned adding this script, which can propagate through the codebase without any trace in the logs.

The same attack was demonstrated within the GitHub Copilot environment, and similar results were observed.

What Can Hackers Do With It?

Hackers can use the attack vector in different ways. For example, they can override security controls, and malicious instructions can cause the AI to overlook safe defaults, as shown in the demonstration.

Threat actors can generate vulnerable code, such as insecure cryptographic algorithms, implement authentication checks with bypasses, and disable input validation in specific contexts.

Other use cases include data exfiltration using the generated code and long-term persistence, where the vulnerabilities get passed on through someone forking the poisoned project.

How to Stay Safe From These Attacks?

The attack could potentially be implanted through developer forums, communities, open-source contributions, and project templates.

The researchers recommend auditing existing rules, implementing validation processes, deploying detection tools, and reviewing AI-generated code as technical precautions.

AI coding assistants did not take responsibility for the security issues flagged by the researchers and mentioned that the user is responsible for protecting against such attacks.

Researchers believe that AI coding tools have created an environment for a new class of attacks. Hence, organisations must move beyond traditional code review practices.

]]>
OpenAI Releases New Audio Models to Power Voice Agents https://analyticsindiamag.com/ai-news-updates/openai-releases-new-audio-models-to-power-voice-agents/ Fri, 21 Mar 2025 05:16:52 +0000 https://analyticsindiamag.com/?p=10166426 The company said these advancements stem from reinforcement learning techniques and extensive training with diverse audio datasets.]]>

OpenAI has launched new speech-to-text and text-to-speech models in its API, providing developers with tools to build advanced voice agents. These models improve transcription accuracy and introduce customisation options for generated speech.

The new speech-to-text models, gpt-4o-transcribe and gpt-4o-mini-transcribe, improve word error rate and language recognition compared to Whisper models. 

In its blog post, OpenAI said these advancements stem from reinforcement learning techniques and extensive training with diverse audio datasets. The models aim to improve transcription reliability in noisy environments, varying speech speeds, and different accents.

“Our latest speech-to-text models achieve lower word error rates across established benchmarks, reflecting improvements in transcription accuracy and language coverage,” OpenAI said.

Developers can now also control how the text-to-speech model speaks. The gpt-4o-mini-tts model allows developers to instruct the model to adopt different speaking styles, such as mimicking a customer service agent. This feature expands use cases in customer interactions and creative storytelling. However, OpenAI clarified that these models are limited to synthetic preset voices.

The company credits improvements in its audio models to pretraining with authentic datasets, advanced distillation methodologies, and reinforcement learning. Distillation techniques have enabled smaller models to retain conversational quality while reducing computational costs.

The new models are available to all developers through OpenAI’s API. OpenAI has also integrated these models with its Agents SDK to simplify development. For real-time, low-latency speech-to-speech applications, OpenAI recommends using its Realtime API.

Looking ahead, OpenAI plans to enhance the intelligence and accuracy of its audio models and explore custom voice options. The company is also engaging with policymakers, researchers, and developers on the implications of synthetic voices. Moreover, OpenAI intends to expand into video, enabling multimodal agentic experiences.

]]>
Perplexity to Raise up to $1 Billion to Double its Valuation to $18 Billion: Reports https://analyticsindiamag.com/ai-news-updates/perplexity-to-raise-up-to-1-billion-to-double-its-valuation-to-18-billion-reports/ Fri, 21 Mar 2025 05:13:10 +0000 https://analyticsindiamag.com/?p=10166424 The company closed a $500 million round at $9 billion valuation in December last year. ]]>

Perplexity AI, the AI-enabled search engine, is in talks to raise funds between $500 million and $1 billion, valuing the company at $18 billion, Reuters reported on Friday. This doubles Perplexity’s valuation, which stood at $9 billion in the previous funding round last December, after raising $500 million. Perplexity has been backed by NVIDIA, SoftBank Group, and Amazon founder Jeff Bezos. 

This development signals a growing demand for AI-powered search engine tools and applications. While Perplexity faces competition from Google’s Gemini and OpenAI’s ChatGPT Search, even Anthropic has entered the game, announcing a web search feature in its Claude chatbot. 

Besides primarily functioning as an AI-powered search tool, Perplexity also provides reasoning and deep research capabilities, among numerous other features. 

The company also recently announced that it is developing an agentic web browser called Comet

Last month, Perplexity also announced Sonar, its in-house AI model, which is now available to all Pro users. Subscribers can set it as their default model in settings, and it is said to perform on par with OpenAI’s GPT-4o. 

Furthermore, Deutsche Telekom, the parent company of T-Mobile, has partnered with Perplexity AI to create a next-generation AI phone. Running on a custom Magenta AI operating system, it will feature Perplexity Assistant.

CEO Aravind Srinivas, in a recent podcast, said that as a company grows bigger, maintaining the same speed and agility becomes challenging. “It’s beginning to happen already a little bit. We’re not as fast as we used to be.”

“We do have staging, deployment testing, A/B testing—all that stuff’s happening, and that’s naturally slowing us down in getting things out to production widely,” he added. Perplexity AI reportedly boasts 15 million active users on its website and the app. 

However, a recent research study highlighted the inaccuracies in AI search engines. The Tow Center for Digital Journalism – Columbia University, performed an evaluation of search tools from ChatGPT, Perplexity, Grok, DeepSeek Search, and Google’s Gemini. Ten articles from each of the 20 publishers were selected randomly, and direct excerpts were picked from those articles as input for the AI tool. 

These tools were then asked to identify the article’s headline, original publisher, publication date, and URL. The study found that collectively, these search engines provided incorrect answers to more than 60% of queries. Notably, Perplexity answered 37% of queries incorrectly, while Grok 3 answered 94% of queries incorrectly.

]]>
How Tredence’s AI-First Approach Transforms the Future of Delivery Service https://analyticsindiamag.com/ai-highlights/how-tredences-ai-first-approach-transforms-the-future-of-delivery-service/ Fri, 21 Mar 2025 05:00:37 +0000 https://analyticsindiamag.com/?p=10166417 Tredence is shifting towards probabilistic systems, which focus on decision-making based on probability, risk, and contextual factors.]]>

AI is no longer a futuristic concept. It is actively reshaping industries and unlocking new opportunities for businesses. Recent advancements in large language models (LLMs) have only accelerated this shift, firmly cementing AI’s place in the mainstream.

Amidst this transformative phase, Tredence, a global data science and AI solutions company, is embracing this revolution with an AI-first mindset that enhances decision-making, reduces operational costs, and accelerates insight generation for enterprises across industries.

Speaking with AIM, Mritunjay Singh, chief operating officer at Tredence,  emphasised that the company’s approach to AI is not just about incremental improvements — it is about redefining problem solving through AI-powered decision systems that deliver real business value by improving efficiency, cutting costs, and driving measurable impact.

“We operate at the intersection of domain expertise, data science, and decision intelligence—transforming insights into impact. If you look at all these new GenAI digital assistant frameworks, we are already implementing them at scale,” Singh said. 

Deterministic to Probabilistic Systems 

“AI innovations like OpenAI’s models or DeepSeek will continue to evolve, and so will we,” said Singh emphasing on how Tredence is built on agility and adaptability, enabling the company to pivot and integrate the latest advancements seamlessly. 

At its core, Singh highlighted that the ability to transition from traditional, deterministic systems to dynamic, probabilistic ones creates a powerful foundation for the future of service delivery— this  is what makes Tredence’s offering so special.

Traditional systems are tactical, disjointed, and designed for fixed outcomes. In contrast, Tredence is shifting towards probabilistic systems, which enable decision-making based on probability, risk, and contextual factors.

This shift has enabled the company to improve project efficiency, reducing project timelines by 50% and accelerating insight generation. Through AI automation, clients see a 40-50% reduction in operational costs by lowering people and software expenses.

Singh said the company’s core belief is not just about predicting the future but about building agility as a strategic advantage. By enabling clients to navigate uncertainty and adapt swiftly, Tredence helps them maximise business impact and stay ahead of change.

“If they’re spending $1 on me, they should get at least $10 of benefit,” Singh added. This focus on delivering substantial business impact is reflected in the company’s success. Notably, Tredence acquired 50+ clients in the past year alone. 

This approach aligns with McKinsey’s recent research report, which notes that while 92% of companies plan to increase their AI investments over the next three years, only 1% consider their AI deployment mature. This indicates a need for continuous adaptation and learning. 

The Future is AI + Humans 

Although Tredence is heavily invested in AI, it doesn’t overlook the importance of human capital. Given the rapidly evolving nature of AI, continuous upskilling and repurposing of talent will be essential. 

The company quarterly trains 100+ fresh graduates through a six-week structured learning program and repurposes ~20% of its employees into emerging technologies, including AI, GenAI, Agentic AI, Edge AI, AI-powered automation & simulations, and multimodal AI systems. The company hired 1600 people last year and plans to hire a similar number this year as well

Tredence focuses on hiring people with strong fundamental problem-solving skills.

“We hire for adaptability and problem-solving skills rather than specific expertise. Our focus is on candidates who can learn quickly and think critically,” Singh said.

The company also has a merit-based remote work model, allowing high-performing employees who understand the company culture to work remotely without compromising productivity.

Tredence’s initiative, Anubhav, is a platform that serves as an inaugural graduation ceremony of sorts, welcoming not only employees but also their families into the organisation. This initiative exemplifies the company’s human-centric approach to AI by prioritising inclusion, emotional connection, and community. Through Anubhav, Tredence reshapes workplace culture.

The company actively integrates AI into its human decision-making by deploying digital assistants that enhance both speed and accuracy. These AI-driven assistants reduce decision-making time by 50-60%, optimize frontline decision-making by providing real-time insights, automate routine processes, and refine strategic actions across industries.

In a retail-specific use case, AI-powered insights have enabled businesses to forecast demand and set pricing at an item-level for each store, rather than at a store-wide level. This has allowed retailers to implement dynamic pricing strategies, such as charging a premium for high-demand products in specific locations and strategically bundling complementary products to drive higher sales.

In healthcare, the company leveraged data and a digital assistant to improve the accuracy of pre-diagnosis from 40–50% to 93%. By integrating diagnostic data and AI-driven recommendations, the system identified high-risk cases for malignancy testing earlier and more accurately, significantly reducing diagnosis time from a week to just 10 minutes. 

Overall, Singh envisions a future driven by AI-powered assistants that go beyond task-based automation. These assistants will integrate into enterprise workflows, proactively surfacing insights from data, emails, and interactions to empower professionals with intelligent decision support.

Tredence is also piloting a centralised AI-driven knowledge repository that makes all internal intelligence—presentations, emails, and documents—instantly accessible to employees, enhancing collective learning and efficiency. Singh envisions a future where AI-driven digital employees become indispensable co-workers, not replacements. 

Much like past technological leaps that amplified human potential, these AI counterparts will elevate service delivery, working alongside humans to enhance efficiency, creativity, and decision-making. 

“If competition does it in six months, we will be able to do it in three months,” Singh concluded, emphasising Tredence’s commitment to speed, efficiency, and customer value.

]]>
‘Nobody Needs to Die of Breast Cancer’  https://analyticsindiamag.com/ai-features/nobody-needs-to-die-of-breast-cancer/ Fri, 21 Mar 2025 04:58:24 +0000 https://analyticsindiamag.com/?p=10166415 Niramai has developed an AI-driven solution which converts thermal images of the chest into cancer health reports.]]>

Breast cancer is one of the most common and life-threatening diseases affecting women worldwide. According to the WHO, in 2022 alone, around 2.3 million women were diagnosed with it, and 6,70,000 lost their lives. Despite medical advancements, breast cancer continues to pose a major health challenge, especially in low-resource regions where access to early detection and treatment is limited. 

Speaking at Rising 2025, Geetha Manjunath, managing director at Niramai Health Analytix, shared how she transitioned from a computer scientist to an entrepreneur after her cousin passed away due to breast cancer.

“One of my very close cousin sisters, a few years younger [to me], was diagnosed with late-stage breast cancer. That was extremely shocking,” she said. This personal experience motivated her to leave her corporate job and establish Niramai eight years ago.

Challenges in Breast Cancer Detection

Manjunath said that breast cancer is a major health concern, with approximately 2,000 deaths occurring worldwide daily. “Nobody needs to die of breast cancer. It is completely curable, but late detection leads to high mortality rates.” Noting that 50% of breast cancer deaths occur in Asia due to late diagnosis, she added, “96% of people go to a hospital only when they notice a lump, which is already a late stage.”

Manjunath revealed that traditional screening methods pose several challenges. Mammograms, which are the standard detection tool, are expensive, require skilled operators, and are recommended only for women above 45 years. “27% of cancer deaths happen under 45, and there is absolutely no test that is objective or standard for detecting breast cancer under 45 years today, anywhere in the world,” she explained.

Introducing Thermalytix

Niramai has developed a novel AI-driven solution called Thermalytix, which converts thermal images of the chest into cancer health reports. “We just measure the temperature variations using a thermal sensor, placed two and a half feet away, without any radiation or touch,” Manjunath described.

This non-invasive, privacy-friendly method uses AI algorithms to detect abnormal temperature patterns. “The AI processes thermal images and marks areas of concern, providing a report within minutes,” she said. Unlike mammograms, this technology works for women of all ages, from 18 to 80, making it widely accessible.

She mentioned that thanks to AI and the innovations associated with it, these screenings can now be provided in hospitals, outreach programmes, and corporate settings.

This is not the first time AI has been used to detect breast cancer. Earlier this year, researchers at the Massachusetts Institute of Technology (MIT) developed a deep learning system called ‘Mirai’ to predict breast cancer risk from mammograms. It gained attention as it can detect breast cancer five years before it develops.

Impact and Adoption

Several hospitals, including HCG, Apollo Clinic, and Narayana Health, have adopted Niramai’s technology. “We have also expanded internationally, with adoption in over 20 countries, including the US, Europe, and parts of Asia,” Manjunath stated. Niramai has received regulatory clearances from India, the European Union, and the United States, ensuring its global applicability.

Privacy and data security are crucial considerations. “We comply with ISO 27001, General Data Protection Regulation (GDPR), and Health Insurance Portability and Accountability Act (HIPAA) regulations to ensure data privacy and security,” she confirmed.

Future Prospects

Looking ahead, Niramai plans to extend its technology beyond breast cancer detection. “Why can’t we use the same technology for other abnormalities? Some doctors have already asked us to explore this,” she concluded.

]]>
Accenture’s Generative AI booking rise to $1.4 billion in Q2 FY25 https://analyticsindiamag.com/ai-news-updates/accentures-generative-ai-booking-rise-to-1-4-billion-in-q2-fy25/ Thu, 20 Mar 2025 13:04:58 +0000 https://analyticsindiamag.com/?p=10166407 In Q2 FY25, the consulting giant reported $16.7 billion in revenue. ]]>

Accenture’s Generative AI bookings totalled $1.4 billion in Q2 FY25, up from $1.2 billion in the last quarter, signalling a sustained momentum in the AI space. 

The company announced its financial results for the second quarter of fiscal 2025, ending February 28, 2025, on Thursday, and highlighted its broad-based revenue growth across markets, industries, and service lines.

In Q2 FY25, the consulting giant reported $16.7 billion in revenue. 

Julie Sweet, chair and CEO of Accenture, credited the company’s strategic focus on reinvention for its performance. “Our second quarter results demonstrate that we continue to deliver on our strategy to lead reinvention for our clients and return to strong growth in FY25, with broad-based growth across markets, industries, and the types of work our clients seek from us,” Sweet said.

She acknowledged that trust and confidence in their unique strengths and capabilities are reflected in 32 clients with quarterly bookings over $100 million. 

The company posted strong growth in cloud, security, and creative business divisions. 

In June 2023, the company reported $100 million in pure-play generative AI projects, and the sales have nearly doubled every quarter since. In fiscal 2024, the company invested about $6.6 billion in acquisitions to strengthen its position in the IT industry and bolster its AI mission. 


Accenture expects the annual revenue to grow between 5% and 7%, compared with its prior forecast of 4% to 7%, Reuters reported. In comparison, analysts had expected revenue growth of 5.7%

]]>
AI Search Will Define the Next Generation of Business—Here’s Why https://analyticsindiamag.com/ai-features/ai-search-will-define-the-next-generation-of-business-heres-why/ Thu, 20 Mar 2025 12:42:39 +0000 https://analyticsindiamag.com/?p=10166403 The AI Business Trends 2025 report by Google sheds light on how AI has changed the way the world discovers information and the benefits of enterprise search. ]]>

Whether searching for a resource on Google or looking for a particular favourite food from within an app, the presence of AI-powered searches is perceived nearly everywhere. From specialised AI search engines to advanced platforms designed to replace conventional search engines, the way information is discovered is being reshaped.

So, what about AI-powered searches geared towards enterprises? 

The AI Business Trends 2025 report by Google sheds light on how AI has changed how the world discovers information and the benefits of enterprise search.

Enterprise Search Market to Experience a Surge in Growth

The enterprise search market size is set to reach $12.9 billion by 2031. 

As per the Google report, the advanced AI-powered search capabilities now let users seek information in a way that mirrors how they naturally experience the world.

The AI-driven search tech includes site search, product search, and customer support self-service search. This is helping organisations enrich and optimise product data catalogues, save significant manual work, and improve conversion and cross-selling efficiency.

Prominent companies are already adopting AI-based search capabilities.

“Snap (Snapchat) deployed the multimodal capability of Gemini within their ‘My Al’ chatbot and has since seen over 2.5 times as much engagement within Snapping to My Al in the United States,” the report stated.

Not just limited to tech companies, hospitals like the Mayo Clinic have also benefited from such capabilities and have given thousands of its scientific researchers access to 50 petabytes worth of clinical data through Vertex AI search, facilitating information access across multiple languages.

Benefits of AI-Powered Search for Enterprise

Advanced search tools provide immense value to businesses. The report highlights three separate benefits: faster access to data, advanced and intuitive searches, and deeper AI-powered insights.

Regarding data access, enterprise search can help employees quickly and efficiently find and utilise internal data, boosting productivity. This should help them with more informed decision-making.

When it comes to intuitivity, users and employees can use complex queries and process various data formats (documents, spreadsheets, and multimedia) to get relevant information. One can replace multiple tools with the help of AI-powered searches.

The report highlighted that integrating AI agents with enterprise search will elevate knowledge retrieval significantly. These agents are capable of accessing and analysing company data, executing complex tasks, and providing valuable recommendations.

Meanwhile, Aashima Gupta, global director of healthcare strategy and solutions at Google Cloud, said, “We expect to see greater adoption of intuitive, contextual search that understands medical terminology, complex vocabulary, and abbreviations—helping relieve administrative burdens for medical professionals while improving patient education and research.”

Furthermore, Zac Maufe, managing director of regulated industries at Google Cloud, said, “We expect to see more financial institutions prioritising robust internal knowledge search for their employees, tailored to their specific roles. For example, a loan officer would receive different results than a risk analyst when searching for information about a particular loan application.”

“We expect GenAl will continue to transform search in retail, allowing customers to find products using natural language, images, or voice commands to deliver higher quality search results,” said Paul Tepfenhart, director of global retail strategy and solutions at Google Cloud.

Hence, it appears that the AI-powered search will impact industries, including finance, retail, and healthcare and life sciences.

The benefits of AI-powered search also extend beyond the enterprise. Companies that adopt these tools deliver new levels of service and support to their customers. 

For instance, Moody’s Corporation uses LLMs from Google Cloud to help employees sift through public documents and the firm’s database to write analyses. This not only improves employee efficiency but also enhances the quality of service provided to Moody’s clients.

Evolution of Search and the Path Forward

Building a robust search system is a complex task, whether it is for Google or any other company. 

The report states that before generative AI, enterprise search systems were keyword-based and often delivered irrelevant results, leading to frustrating user experiences. Going forward, businesses can integrate LLMs into their legacy systems to improve search accuracy and relevance.  

While building AI-powered search systems can be challenging, companies like Google are trying to make it easier. These solutions remove the complexity from search systems, making it easier for companies to implement and benefit from AI-powered search.  

AI-powered search is revolutionising how businesses operate and interact with their customers. By making knowledge discovery faster, more intuitive, and more relevant, AI transforms enterprise search into a powerful tool for innovation, growth, and enhanced customer service. As AI technology continues to evolve, we can expect more drastic changes.

]]>
Cadence, NVIDIA Extend Partnership for Accelerated Computing and Agentic AI https://analyticsindiamag.com/ai-news-updates/cadence-nvidia-extend-partnership-for-accelerated-computing-and-agentic-ai/ Thu, 20 Mar 2025 11:34:11 +0000 https://analyticsindiamag.com/?p=10166392 Cadence will leverage NVIDIA’s Blackwell architecture for engineering and scientific solutions. ]]>

Cadence Design Systems, a leading computational software company, has announced that it is expanding its multi-year collaboration with NVIDIA, focusing on accelerated computing and agentic AI

This partnership addresses global technology challenges by driving innovation across various industries and involves Cadence leveraging NVIDIA’s latest Blackwell architecture to accelerate its engineering and scientific solutions. 

This includes reducing computational fluid dynamics simulation times by up to 80 times, from days to minutes, and accelerating the Cadence Spectre X Simulator by up to 10 times.

Jensen Huang, NVIDIA CEO, noted, “Accelerated computing and agentic AI are setting new standards for innovation across industries.” 

Using its Fidelity CFD Platform, Cadence also successfully ran multi-billion cell simulations on NVIDIA GB200 GPUs in under 24 hours. It would have previously required a top 500 CPU cluster with 100,000 cores and several days to complete. 

The company expressed that it will continue to leverage Blackwell for simulation and help the aerospace industry reduce the amount of wind tunnel tests by reducing cost and expediting time to market.

New Era for Accelerated Computing

Additionally, the partnership involves the companies working together on a full-stack agentic AI solution for electronic and system design, as well as science applications. This will integrate Cadence’s JedAI Platform with NVIDIA’s NeMo generative AI framework and the Llama Nemotron Reasoning Model. 

Anirudh Devgan, president and CEO at Cadence, says, “We’re enabling the delivery of today’s infrastructure AI and agentic AI and transforming the principled simulations that underpin physical AI and sciences AI.”

The collaboration is expected to transform industries by enabling complex simulations that were previously impossible, driving efficiency, and fueling scientific discovery. It will also deliver breakthroughs in simulation, optimisation, and design. 

In his keynote address at the NVIDIA 2025 GTC summit, he mentioned that until now, the giant had been using general-purpose computers running software super slowly to design accelerated computers for everybody else. 

But with the entry of optimised CUDA software, “now our entire industry is going to get supercharged as we move to accelerated computing.”

Cadence Molecular Sciences (OpenEye) is also integrating NVIDIA BioNeMo NIM microservices with its cloud-native molecular design platform, Orion. Cadence has also been one of the first adopters of NVIDIA Omniverse Blueprint for AI factory digital twins. 

Cadence and NVIDIA are leading the way in creating an ecosystem of high-quality models, allowing equipment manufacturers and data centre companies to quickly create digital twins.

]]>
Bengaluru-Based SkyServe Collaborates with NASA and D-Orbit for Earth Observation https://analyticsindiamag.com/ai-news-updates/bengaluru-based-skyserve-collaborates-with-nasa-and-d-orbit-for-earth-observation/ Thu, 20 Mar 2025 11:30:27 +0000 https://analyticsindiamag.com/?p=10166387 This involves optimising and deploying AI models developed under NASA’s New Observations Strategies initiative.]]>

Bengaluru-based space tech company SkyServe announced that it is working with scientists at NASA’s Jet Propulsion Laboratory (JPL) and D-Orbit, a space logistics company, to advance Earth observation systems. 

The collaboration involves optimising and deploying AI models developed under NASA’s New Observations Strategies (NOS) initiative on D-Orbit’s ION Satellite Carrier

This initiative aims to create a unified network of space-borne, terrestrial, and airborne sensors for near-real-time monitoring phenomena like wildfires, floods, and urban heat islands. The company ultimately aims to support both scientific research and disaster response efforts.

The collaboration also involves harnessing edge computing to integrate and optimise AI models across diverse sensor configurations. 

SkyServe’s STORM, an edge-computing platform for satellites, enables the deploying of advanced AI applications in orbit. Complementing STORM, the SURGE software suite provides an on-ground environment to test, develop, and deploy AI models. 

Vishesh Vatsal, CTO at SkyServe, noted, “SkyServe’s technology plays a pivotal role in streamlining AI model deployment across diverse satellite platforms, ensuring consistency and efficiency.” 

D-Orbit’s ION satellite platform supports these efforts with a robust infrastructure. This collaboration also highlights the integration of Edge AI solutions with satellite platforms to address Earth observation challenges. 

The collaboration aims to enhance emergency response times and accelerate decision-making by enabling a wide range of use cases, such as detecting wildfires and tracking unregistered ships. 

By simulating space-embedded environments, SURGE ensures AI models maintain operational consistency across varied satellite platforms and computing environments. 

This collaboration is part of broader efforts to redefine the possibilities of space innovation and drive advancements in global monitoring.

In May last year, SkyServe also announced that it had successfully achieved Smart Earth Imaging in orbit, marking an important step forward in Earth observation. It demonstrated the ability to generate actionable insights from space in a fraction of the time.

A month before, it had also collaborated with D-Orbit to deploy STORM on a SpaceX-launched satellite. Within seconds of capturing imagery over the Egypt-Sinai Peninsula, STORM performed intelligent tasks onboard. 

These included error correction, cloud/water removal, and vegetation identification. The optimised data was then transmitted back to Earth compressed by 5X.

]]>
Michael Seibel Steps Down from Y Combinator After 12 Years https://analyticsindiamag.com/ai-news-updates/michael-seibel-steps-down-from-y-combinator-after-12-years/ Thu, 20 Mar 2025 11:19:52 +0000 https://analyticsindiamag.com/?p=10166384 Before YC, Seibel co-founded Justin.tv (later Twitch, sold to Amazon) and Socialcam (sold to Autodesk).]]>

Michael Seibel steps down from the San Francisco-based Y Combinator (YC) as a group partner. After spending a decade at the accelerator, Seibel said he would move into a partner emeritus role. “The next adventure I’m excited to pursue is how I can help the government better serve its citizens,” he posted on X

Before joining YC, Seibel co-founded Justin.tv, which evolved into Twitch and was acquired by Amazon in 2014 for $970 million, and Socialcam, sold to Autodesk in 2012 for $60 million. 

During his time at YC, he served in various capacities, including as founder, group partner, managing director, and CEO of the accelerator. 

President and CEO Garry Tan shared a blog highlighting Seibel’s lasting impact on YC’s values, programs, and founder support.

Also, YC recently celebrated its 20th anniversary. Founded on March 11, 2005, by Paul Graham, Jessica Livingston, Trevor Blackwell, and Robert Tappan Morris, YC has played a pivotal role in nurturing early-stage startups, leading to the creation of over $800 billion in market value. Past YC alumni include Stripe, Airbnb, and Reddit.

YC held its Winter 2025 Demo Day on Wednesday, showcasing 160 new startups.

YC has shifted its focus from funding early-stage internet startups to AI startups. Group partner Jared Friedman spoke about the influence of AI within the startups as well. He said that one-quarter of the YC founders admitted that over 95% of their codebase was AI-generated. He pointed out that these were highly skilled founders who, just a year ago, would have built their products entirely on their own—but now, AI does the heavy lifting.

]]>
Narayana Health Proves You Don’t Need Excel to Build a Data-Led Masterpiece https://analyticsindiamag.com/ai-features/how-narayana-health-built-a-data-led-strategy-without-excel-sheets/ Thu, 20 Mar 2025 09:30:00 +0000 https://analyticsindiamag.com/?p=10166373 “After 20 years of my career, I’m taking over as CFO of a listed company with no Excel sheets. I thought they were setting me up to fail,” Sandhya Sriram said.]]>

When Sandhya Sriram took over as the group chief financial officer at Narayana Health, she expected to rely on the same Excel sheets that had been her lifeline throughout her career in finance. However, there were none. Instead, everything was integrated into a data tool called Medha. 

“After 20 years of my career, I’m taking over as CFO of a listed company with no Excel sheets. I thought they were setting me up to fail,” Sriram explained while talking about the data-led strategy at Narayana Health at The Rising 2025

She revealed what she didn’t realise at the time was that Narayana Health was already ahead of its time—operating in a future where data-driven decisions had replaced the tedious manual processes of the past.

Letting Go of the Old Ways

For finance professionals, control often comes from managing numbers in Excel and PowerPoint. At Narayana Health, however, financial reviews weren’t prepared over weeks of back-and-forth data collection. Instead, everything was dynamically available on Medha. “No PPTs, no firefighting, just clean data in real-time,” Sriram said.

This transformation wasn’t just about convenience—it was about precision. The platform enabled real-time tracking of revenue, cost efficiencies, and operational bottlenecks. “Healthcare revenue isn’t something you can manipulate—it depends on patient inflow. But costs and quality? Those we can control,” she explained.

Narayana Health was founded by Dr Devi Shetty with a mission to make healthcare more affordable. One of the most impressive ways they’ve achieved this is by using AI and predictive analytics to manage inventory. 

“In FMCG, inventory write-offs are common. But when I joined Narayana, I found that we hadn’t taken a single inventory write-off for three years,” she said. This was because Medha could predict which pharma inventory was about to expire and ensure it was used in time.

Predictive AI also played a crucial role in operational efficiency. For example, Medha optimised the use of operating rooms by analysing patterns and suggesting scheduling improvements. “Every digital intervention has a cost—not just in money, but also in change management. People resist new systems, so we had to ensure that every investment in tech delivered real returns.”

Medha Does it All

Beyond operations, Medha has transformed Narayana Health’s marketing efforts. “Marketing used to be a black box—you spent money and hoped for the best. Today, digital tools allow us to track ROI for every rupee spent,” Sriram explained. Finance teams that once struggled to quantify the impact of marketing spend can now measure it with data-backed key performance indicators (KPIs).

Moreover, budgeting has also evolved. Instead of pulling data from different sources manually, Medha runs detailed financial scenarios at every level. “As a CFO, I want to know: what if revenue dips by 5%? What if a cost centre overruns? Medha lets me test these scenarios instantly,” Sriram explained. 

The ability to simulate financial outcomes in real time gives leadership a significant advantage in planning and risk management.

Building a Data-Centric Culture

For any data initiative to succeed, leadership buy-in is crucial. Narayana Health ensured that business leaders—not just the tech team—owned the digital transformation. “Every rupee invested in data analytics had to deliver a tangible impact, whether in cost reduction, revenue growth, or operational efficiency,” she emphasised.

However, some challenges remained. “Initially, every operational committee review had someone questioning data accuracy. But we made a rule—no external data sources would be entertained. The numbers had to come from Medha, even if there were issues. We had to trust the system for it to work,” she said.

Narayana Health operates with a revenue per patient significantly lower than competitors, yet its profitability remains strong. “We don’t focus on extracting maximum revenue from patients. We focus on running an efficient operation. Data is our key enabler,” she said. 

The approach has been recognised globally, including a Harvard Business School case study and a Netflix feature on Narayana Health’s model.

At the core of this transformation is Medha, Narayana Health’s in-house data team. “They aren’t just building dashboards; they’re driving impact. Every new dashboard must show measurable value, whether in cost savings, revenue improvements, or productivity gains,” Sriram concluded.

By eliminating reliance on Excel sheets, shifting financial decision-making to real-time analytics, and embedding a data-driven culture, Narayana Health has set a new standard for how businesses—especially in healthcare—can leverage technology to drive efficiency and affordability. 

Moreover, for CFOs like Sriram, it’s proof that, sometimes, letting go of old ways is the only way forward.

]]>
LinkedIn Reveals India’s Top Skills for 2025, AI Literacy Takes Lead https://analyticsindiamag.com/ai-news-updates/linkedin-reveals-indias-top-skills-for-2025-ai-literacy-takes-lead/ Thu, 20 Mar 2025 09:24:55 +0000 https://analyticsindiamag.com/?p=10166381 As AI automates routine tasks, human-centric skills like creativity, innovation, problem-solving, and strategic thinking are becoming increasingly valuable. ]]>

LinkedIn, the world’s largest professional networking platform, has unveiled its ‘Skills on the Rise’ ranking, identifying the top 15 skills professionals need to stay competitive in the evolving job market in 2025. In India, the fastest-growing skills include creativity and innovation, code review, problem-solving, pre-screening, and strategic thinking. 

With 64% of job-related skills expected to change by 2030, LinkedIn research highlights that 25% of professionals are concerned about their future skills, 60% are open to shifting industries, and 39% plan to acquire new skills. Moreover, 69% of recruiters report a gap between candidate skills and business requirements.

AI literacy is emerging as an essential competency across industries. Malai Lakshmanan, head of India engineering at LinkedIn, emphasised, “AI literacy is becoming essential, with our latest data showing a growing demand for skills like large language models (LLMs) and prompt engineering for tech roles in India. At the same time, foundational engineering strengths, such as software design and code review, remain critical for building high-quality, scalable solutions.” 

He mentioned that we are at a pivotal moment where mastery of core technology skills like design and coding is just as important as new-age AI capabilities for engineers. Professionals who combine these strengths will be best prepared to thrive in the agentic AI future that lies ahead.

As AI automates routine tasks, human-centric skills like creativity, innovation, problem-solving, and strategic thinking are becoming increasingly valuable, not just in marketing and design but also in business development and education. Communication, once primarily linked to sales and HR, is now considered critical in IT, consulting, and finance. 

AI skills are transitioning from an advantage to a baseline expectation, with 95% of Indian C-suite leaders prioritising AI proficiency over traditional experience. Skills like LLMs, AI literacy, and prompt engineering are now in high demand across industries beyond tech, including education and marketing.

Meanwhile, businesses are placing a stronger focus on customer engagement, a skill that has gained importance in sales, business development, and marketing, as companies seek to strengthen brand loyalty and foster lasting customer relationships.

Nirajita Banerjee, LinkedIn’s senior managing editor for India, advises professionals to assess their existing skillsets, highlight soft skills prominently on their LinkedIn profiles, and actively pursue learning opportunities through online courses, stretch assignments, or volunteer work. Data suggests that members who list five or more skills on their profiles receive significantly higher profile views and recruiter engagement.

LinkedIn’s List of Emerging Jobs for 2025

LinkedIn recently released a list of jobs that have gained prominence due to AI redefining hiring practices in 2025. This highlights the need for individuals to upskill and embrace new opportunities to remain competitive in the job market. 

Among the most sought-after roles is the AI engineer, who is responsible for designing and training AI models and algorithms to address complex problems and enhance system performance.

]]>
How Commonwealth Bank of Australia Built a GenAI Chatbot in Just 6 Weeks https://analyticsindiamag.com/global-tech/how-commonwealth-bank-of-australia-built-a-genai-chatbot-in-just-6-weeks/ Thu, 20 Mar 2025 08:00:00 +0000 https://analyticsindiamag.com/?p=10166377 While the initial focus was improving internal operations, the bank has now extended these capabilities to customer-facing solutions.]]>

The Commonwealth Bank of Australia (CBA) is integrating generative AI across its operations to improve customer service, simplify processes, and strengthen security. Speaking at the The Rising 2025, India’s biggest summit on women and tech in AI, Nidhi Sinha, general manager, chief data analytics office (CDAO) at CBA India, revealed that the bank built its generative AI chatbot, CommBank, in less than six weeks.

“A lot of organisations are experimenting with generative AI,” Sinha said, “But we are using it to lift and accelerate what we are currently doing.” She added that while the initial focus was improving internal operations, CBA has now extended these capabilities to customer-facing solutions.

One such initiative was launched in November for business banking. Business customers, who often have complex queries about products and payments, previously had to navigate through 80 different FAQ documents or contact call centres, resulting in delays. 

“We have embedded the generative AI solution within the app,” Sinha explained. “It fetches data from all these documents, allowing customers to complete transactions without leaving the app.” The chatbot understands context and provides relevant answers, reducing the need for external searches or call centre support. “Best of all, this entire solution was developed in just six weeks, from infrastructure provisioning to deployment.”

She added that CBA built this solution on AWS, one of its strategic partners. “The pace of change has never been this fast, yet it will never be this slow again,” Sinha remarked, quoting an internal mantra that reflects the rapid advancements in AI adoption. 

Notably, CommBank entered a five-year strategic collaboration with Amazon Web Services (AWS) earlier this year to continue as the bank’s preferred cloud provider. 

The bank currently has over 60 generative AI use cases, with many already live for both customers and internal users. These solutions drive efficiency and improve customer interactions.

To support AI-driven transformation, CBA has established the GenAI Council, a leadership body that includes senior executives and the CEO and is focused on AI acceleration. 

“A combination of oversight and a combination of federation is really helping us accelerate at a very, very fast pace,” she further said. “Different teams run with their own use cases, while central oversight ensures alignment and scalability.”

CBA’s investment in AI spans more than a decade. “We have been recognised as the number one bank in AI in the Asia-Pacific region for two consecutive years and globally as the best in responsible AI,” Sinha said. “We are also working with the Australian government to develop AI principles for the country.”

She further shared that over 60% of the population in Australia uses CBA in some form, with one-third considering it their primary financial institution. 

The bank has prioritised responsible AI, integrating governance frameworks to ensure safe implementation. “For example, checks on groundedness prevent hallucinations, profanity filters maintain appropriate interactions, and jailbreaking safeguards ensure models are not misused. These controls are centrally managed and available for all teams.”

CBA is also accelerating its data strategy, moving from on-premise to cloud. “Initially, this was planned for 18 months, but by leveraging AI, we are completing it in nine months,” Nidhi said. 

“By June, all our data will be on the cloud, providing practitioners with high-quality data to accelerate use cases.”

Collaboration is key to CBA’s AI journey. “We have world-class partnerships with AWS, Microsoft, and Anthropic, giving us access to cutting-edge AI capabilities and top talent,” she said. “These partnerships are critical in our generative AI journey.”

AI-Driven Financial Management

One of CBA’s AI-driven solutions focuses on financial management. The bank uses predictive analytics to help customers manage their finances effectively. “We have a product where we use AI to predict future cash flows and nudge our customers at the right time,” Sinha said. This proactive approach allows customers to plan better and avoid financial stress.

CBA also applies AI to disaster response. Given Australia’s exposure to natural disasters, the bank integrates external weather data with its customer information. “We use data and AI to identify customers and communities proactively who would be impacted by some calamity, and we reach out to them to provide our help,” she said.

Personalised Customer Engagement

CBA has been personalising customer interactions for over a decade. “In 2015, we launched something we call the Customer Engagement Engine,” Sinha shared. This AI-driven platform connects all customer interaction channels—mobile banking, branch visits, and call centres—to offer real-time recommendations.

“When a customer comes into any of these channels, there is something called next best conversation (NBC) that is surfaced to them,” she explained. “It could be about an offer, a service, or even a simple happy birthday message.” 

She added that the system processes over 3.1 trillion data points and runs 2,000 adaptive models to ensure relevance.

This AI-driven approach has also influenced customer engagement through CBA’s loyalty program, Yello. “What we have seen in the last five years is that customers engaged with this program log into our app, on average, 67 times a month—twice a day.”

AI for Fraud Prevention

CBA has made significant strides in fraud prevention using AI. “With AI evolving, threat actors and scammers also have access to these technologies, and fraud is increasing globally,” Sinha warned. She cited global statistics showing a $1 trillion loss due to fraud last year.

However, CBA has successfully reduced scams by 70% in the past two years. “This is a big achievement, though we aim to do more,” she explained. The bank monitors 20 million transactions daily, detecting fraudulent activity in real time. “Within 10 milliseconds, an alert is sent to the customer, allowing them to take immediate action.”

CBA has also introduced features like NameCheck and CallerCheck to prevent mistaken or fraudulent transactions. “This has actually helped us save $650 million per customer,” she shared.

Preventing Payment Abuse

One of the more unique AI applications at CBA involves preventing abuse through payment messaging. “We discovered that some individuals were misusing the payment description field to send abusive messages,” Sinha said.

In response, CBA launched a profanity blocker in 2021. “At the time of payment, if abusive words are detected, the transaction is blocked immediately,” she explained. However, the challenge went beyond explicit words. “Even simple phrases like ‘I love you’ can be threatening in certain contexts,” she said. The bank has since developed AI models to detect harmful intent, ensuring payments are not used as a tool for harassment.

Talent Development 

Another focus area is talent development. “People are the cornerstone of our AI initiatives,” Sinha said. “Five years ago, we set up CBA India, and today, we have 46% workforce diversity, with 41% representation in leadership roles…An inclusive workforce enables us to understand customers better and think beyond traditional banking.”

CBA’s AI journey is not just about technology but also about fostering a culture of experimentation, innovation, and responsibility. “We are creating a model where people raise their vision to do more with AI,” Sinha concluded. 

]]>
Researchers Unveil AudioX—AI Model That Converts Anything to Audio, Music https://analyticsindiamag.com/ai-news-updates/researchers-unveil-audiox-ai-model-that-converts-anything-to-audio-music/ Thu, 20 Mar 2025 06:37:08 +0000 https://analyticsindiamag.com/?p=10166366 A new research paper introduces AudioX as a diffusion transformer model that enables various audio generation capabilities.]]>

Researchers from the Hong Kong University of Science and Technology and Moonshot AI have teased a new AI model called AudioX, that generates audio and music using multimodal inputs.

AudioX is described as a unified model offering flexible natural language control and seamless processing of inputs that include text, video, image, music, and audio. This differs from the standard domain-specific models that typically focus on a single modality or a limited set of input conditions.

The research paper mentioned use cases like text-to-audio, text-and-video-to-audio, and video-to-audio with AudioX. Notably, the AI model also lets one refine existing audio through a text prompt, improve unprocessed music, and generate music from scratch.

Netizens seem excited about the demo of the model shared on the model’s GitHub repo, highlighting interesting use cases like generating audio for a tennis video:

The researchers mentioned that they aim to address the scarcity of high-quality multi-modal data, which has been a major bottleneck in the development of versatile audio generation systems. To tackle this, they curated two comprehensive datasets: vggsound-caps, with 190K audio captions based on the VGGSound dataset, and V2M-caps, with 6 million music captions derived from the V2M dataset.

“Extensive experimental results show that AudioX not only excels in intra-modal tasks but also significantly improves inter-modal performance, highlighting its potential to advance the field of multi-modal audio generation,” the research paper stated.

Currently, the code for the model is not available. The researchers mentioned it would be available on the GitHub page without specifying a timeframe or licence details.

There are various text-to-music models and some text-to-speech models available, which have seen creative use cases in the AI space. It remains to be seen how AudioX opens up more possibilities.

]]>
What Was Former Intel CEO Doing at NVIDIA’s Flagship Event? https://analyticsindiamag.com/global-tech/what-was-former-intel-ceo-doing-at-nvidias-flagship-event/ Thu, 20 Mar 2025 06:36:06 +0000 https://analyticsindiamag.com/?p=10166368 Pat Gelsinger disagrees with Jensen Huang on quantum computing.]]>

At NVIDIA’s GTC 2025 event on Tuesday, the company delivered a variety of new advancements across AI hardware, personal supercomputers, self-driving cars, and humanoid robots. Moreover, the event took an unexpected turn when an unlikely guest made an appearance.

Surely, if Pat Gelsinger was still the CEO of Intel, there’s no way he’d be seen mingling with CEO Jensen Huang at an NVIDIA event. That said, Gelsinger certainly didn’t hold back and offered a few strong takes on the industry. 

He participated in a panel discussion alongside the hosts of the Acquired podcast and several other industry experts. While Gelsinger applauded NVIDIA’s accomplishments in the present era of AI, he disagreed with Huang on certain key issues—specifically, the timeline for the arrival of quantum computing and the use of GPUs for inference. 

‘Data Centres Will Have CPUs, GPUs, and QPUs’

Gelsinger, who is notably bullish on quantum computing, stated that it could be realised within the next few years. 

This stands in contrast to Huang’s comments earlier this year, where he said that bringing “very useful quantum computers” to market could take anywhere from 15 to 30 years. His statements triggered a massive selloff in the quantum computing sector, wiping out approximately $8 billion in market value. 

“I disagree with Jensen,” said Gelsinger, adding that the data centres of the future will have quantum processing units (QPUs) handling workloads, along with GPUs and CPUs. 

Similar to how GPUs are deployed to handle tasks for training AI models in language and human-like behaviour, Gelsinger believes it is only appropriate to have a quantum computing model for the complex parts of humanity. “Most interesting things in humanity are quantum effects,” he said. 

He added that many unsolved problems today run on quantum effects, and quantum computers would help realise many ideas like superconducting, composite materials, cryogenics and medical breakthroughs, among others.

“That’s why this is a thrilling time to be a technologist. I just wish I was 20 years younger to be doing more,” he said. 

While Gelsinger differs from Huang, he shares an optimistic view with Microsoft co-founder Bill Gates and Google

“There is a possibility that he (Huang) could be wrong. There is the possibility in the next three to five years that one of these techniques would get enough true logical qubits to solve some very tough problems,” said Gates to Yahoo Finance. 

Besides, even Microsoft and Amazon have already taken major strides in quantum computing within the first three months of the year. On the flipside, Meta CEO Mark Zuckerberg resonated with Huang. “My understanding is that [quantum computing] is still ways off from being a very useful paradigm,” Zuckerberg had said in a podcast episode a few months ago. 

Ironically, NVIDIA does seem to have huge plans for quantum computing. The company announced at the GTC event that it is building a Boston-based research centre to advance quantum computing

‘Huang Got Lucky With AI’

Besides, Gelsinger clarified that he isn’t a fan of GPUs for AI model inference—the process in which a pre-trained AI model applies its learnings to generate outputs.

He reflected on the early days when a CPU, or a cluster of them, was the undisputed “king of the hill” for running workloads on computer systems. When Huang decided to use a graphics device (GPU) for the same purpose, Gelsinger said that, in the end, he “got lucky” with AI. 

While he acknowledged that AI and machine learning algorithms demand the GPU architecture, which is where most of the developments are being made today, he also pointed out, “There’s a lot more to be done, and I’m not sure all of those are going to land on GPUs in the future.” 

While GPUs work well for training, Gelsinger added that there needs to be a more optimised solution for inference. “A GPU is way too expensive. I argue it’s 10,000 times too expensive to fully realise what we want to do with the deployment of inference of AI.” 

His sentiments are also reflected by the growing ecosystem of inference-specific hardware that is overcoming the inefficiencies posed by GPUs. Companies like Groq, Cerebras, and SambaNova have achieved tangible and useful real-world results for providing high-speed inference. 

For instance, French AI startup Mistral recently dubbed its app ‘Le Chat’ the fastest AI assistant by deploying inference on Cerebras’ hardware. 

Even Huang has acknowledged this in the past. In a podcast episode last year, he said that one of the company’s challenges is to provide efficient, high-speed inference. Having said that, companies working on AI inference hardware may not compete with NVIDIA after all.  

Jonathan Ross, CEO of Groq, said, “Training should be done on GPUs.” He also suggested that NVIDIA will sell every single GPU they make for training. 

All things considered, Gelsinger’s first outing post-resignation involved several strong statements. However, it remains clear that he’s still a massive fan of Huang and the work NVIDIA has accomplished. 

When DeepSeek made a significant impact on NVIDIA’s stock price, Gelsinger argued that the market reaction was wrong. He also revealed that he is an NVIDIA stock buyer, expressing that he was “happy” to benefit from the lower prices. 

]]>
India’s First AI Unicorn Fractal Invests $20 Million in Asper.AI https://analyticsindiamag.com/ai-news-updates/indias-first-ai-unicorn-fractal-invests-20-million-in-asper-ai/ Wed, 19 Mar 2025 15:16:09 +0000 https://analyticsindiamag.com/?p=10166362 The investment aims to enhance Asper.AI’s autonomous growth AI platform and expand its product offerings of AI solutions.]]>

Fractal, a global provider of enterprise AI solutions, has announced a $20 million strategic investment in Asper.AI, one of its product companies focused on AI-driven growth solutions for consumer goods and manufacturing. The company said the investment will support Asper’s expansion, enhance product development, and scale its enterprise customer base globally.

Founded with the goal of transforming decision-making in enterprise growth, Asper.AI leverages AI across four key areas, including demand forecasting and planning, revenue growth management, inventory planning, and sales execution. 

The company operates across Bengaluru, New York, London, and San Francisco, focusing on delivering interconnected, AI-powered decisions that enhance business performance.

Pranay Agrawal, co-founder and CEO of Fractal, emphasised Asper’s rapid progress and said, “Asper has demonstrated exceptional growth and innovation in just three years. We are thrilled to continue our partnership with Asper’s team to drive the next phase of growth. The phased investment will fuel Asper’s vision, unlocking new opportunities for enterprise customers.”

Meanwhile, Asper.AI’s co-founder and CEO, Mohit Agarwal, highlighted that more than AI, consumer goods need an ally that scales with their operations, speaks their language, and turns data into actionable decisions. 

“This investment from Fractal enables us to enhance our autonomous growth AI platform, attract top talent, and expand our product offerings to meet the increasing demand for cutting-edge AI solutions,” he added. 

The company expects that developing such expertise internally demands substantial investment in AI talent, infrastructure, ongoing model training, and deep domain knowledge. 

Fractal, founded in 2022, is known for driving AI adoption among Fortune 500 companies. It has built a portfolio of AI-driven businesses, including Asper.AI, Flyfish (a GenAI platform for search and product discovery), and Analytics Vidhya (a leading data science community). It also incubated Qure.ai, a healthcare AI player focused on tuberculosis, lung cancer, and stroke detection. 

]]>
Google.org Backed Rocket Learning Launches AI Tutor Appu  https://analyticsindiamag.com/ai-news-updates/google-org-backed-rocket-learning-launches-ai-tutor-appu/ Wed, 19 Mar 2025 15:00:19 +0000 https://analyticsindiamag.com/?p=10166359 Appu was built in six months with technical assistance from Google.org Fellows and a $1.5 million grant awarded in 2023.]]>

Delhi-based Rocket Learning launched ‘Appu,’ an AI-powered tutor designed to provide personalised and conversational learning experiences for children aged 3 to 6 on Wednesday. 

Developed with support from Google.org, the philanthropic arm of the tech giant, the startup aims to reach 50 million families by 2030, beginning with Hindi and expanding to 20 other languages, including Marathi and Punjabi. While Appu is already being piloted, Rocket Learning plans for wider adoption in Anganwadi centers and government-run preschools.

The company, founded in 2020, is led by Azeez Gupta, Vishal Sunil, Namya Mahajan, Siddhant Sachdeva, and Utsav Kheria. The organisation uses technology to bridge learning gaps in early childhood education. Gupta, a Harvard Business School graduate, co-founded the startup with the mission of democratising learning. Sunil, the company’s CTO, has been recognised in Forbes 30 Under 30 for his contributions to education technology.

AI for Education

“Appu is the future of learning—AI-driven, intuitive, and designed to evolve,” said Sunil. “We see adaptability as our greatest strength and personalization as the catalyst for unlocking every child’s potential. With 85% of brain development happening by age six, early childhood education is the next frontier in human capital.”

The AI tutor was built in six months with technical assistance from Google.org Fellows and a $1.5 million grant awarded in 2023 under the AI for Global Goals Impact Challenge. Appu leverages LLMs to deliver real-time, engaging lessons focused on pre-literacy, numeracy, and social-emotional skills. 

According to the National Sample Survey 75th Round report (2022), more than 40 million children in India lack access to quality preschool education, with many struggling with basic literacy and numeracy by the second grade. Rocket Learning is looking to bridge this gap through AI-based training, among other initiatives. 

Annie Lewin, Senior Director at Google.org, emphasised the significance of the project and said, “We look forward to seeing how this innovation helps shape the future of learning, as they scale their work to 20 additional languages.”

Beyond its immediate impact on education, Rocket Learning’s initiatives, including Appu, are projected to contribute $4 billion in lifetime value to India’s economy. 

Earlier this year, Google.org launched its Generative AI Accelerator program with a $30 million budget. The initiative aims to support nonprofit organisations leveraging generative AI to create widespread impact. The six-month program offered technical training, Google Cloud credits, and pro-bono assistance from Google employees, along with a share of the $30 million funding.

]]>
Tech Mahindra, Wipro Individually Partner With NVIDIA at GTC 2025 https://analyticsindiamag.com/ai-news-updates/tech-mahindra-wipro-individually-partner-with-nvidia-at-gtc-2025/ Wed, 19 Mar 2025 14:35:43 +0000 https://analyticsindiamag.com/?p=10166354 While Tech Mahindra aims to enhance drug safety with NVIDIA, Wipro has launched sovereign AI services.]]>

Tech Mahindra and NVIDIA have announced a collaboration to develop an autonomous pharmacovigilance solution for improving drug safety management. According to the company blog, the solution uses Tech Mahindra’s TENO framework alongside NVIDIA AI Enterprise software, including NeMo, NIM microservices, and AI Blueprints.

The solution is being showcased at NVIDIA’s GTC 2025.

“AI is ideal for monitoring medicines throughout their lifecycle to support safety. Integrating AI into the Tech Mahindra TENO framework with NVIDIA AI Enterprise software enhances pharmacovigilance by augmenting human capabilities to help identify potential safety issues more effectively,” said John Fanelli, vice president of enterprise software at NVIDIA.

The goal is to reduce the risk of human error. The AI system automates pharmacovigilance workflows, handling case intake, data transformation, quality control, and compliance management. In this system, AI agents classify, prioritise, and verify pharmacovigilance emails. 

According to the companies, the solution can reduce turnaround times by up to 40%, increase data accuracy by 30%, and lower operational costs by 25%. The system processes adverse drug reaction (ADR) cases and supports regulatory compliance through autonomous decision-making.

Nikhil Malhotra, chief innovation officer at Tech Mahindra, said the collaboration with NVIDIA will help the pharmaceutical industry manage large volumes of data more efficiently. By applying generative AI and multi-agent systems, they will improve drug safety.

“Together, we are revolutionising drug safety management and using the innovative AI-driven framework to develop multiple use cases for our global customers,” Malhotra said 

Alongside Tech Mahindra, Wipro has also launched sovereign AI services with NVIDIA to help governments and businesses develop country-specific AI solutions. The services use Wipro’s WeGA Studio and NVIDIA AI software to support local language models, data privacy, and AI governance. The applications include healthcare, banking, education, and emergency services, focusing on data security and sovereignty.

]]>
Google Brings New Features to NotebookLM and Gemini https://analyticsindiamag.com/ai-news-updates/google-brings-new-features-to-notebooklm-and-gemini/ Wed, 19 Mar 2025 12:36:12 +0000 https://analyticsindiamag.com/?p=10166350 Google has been announcing major updates across all its AI products.]]>

Tech giant Google rolled out NotebookLM’s new mind map feature on Wednesday, allowing users to see a visual summary from any source document. The topics and their related ideas are represented as a branching diagram.

Simon Tokumine, director of product management at Google Labs, took to X to make this announcement. 

As per the official documentation, one can use the mind maps feature when trying to understand the big picture of the source material, explore unfamiliar information, connect the dots, and get a structure of the information.

To generate a mind map, one needs to open an existing notebook (or create a new one). Once the source is analysed, a “Mind Map” button will appear, generating a mind map note to see the visual summary. Furthermore, one can interact with the Mind Map by zooming in/out, scrolling, expanding/collapsing branches, and clicking on the nodes to ask questions in NotebookLM chat.

The Mind Map can also be downloaded and shared as an image file. The feature will be available globally in the next seven days and should be available in some countries presently. AIM was able to access the feature, and it worked as expected.

Besides this, the company also added a new feature to Gemini, Canvas, that provides an interactive space for refining documents and code. It also enabled support for Audio Overview in Gemini. 

A user needs to select ‘Canvas’ in the prompt bar and start writing/editing documents, or code, with changes appearing in real-time. The feature also supports previews for HTML/React code.

To collaborate, one can export the output to Google Docs and share it with others.

Google has been announcing major updates across all its AI products, like Gemma 3, and new models tuned for robotics, and it appears to be getting more serious with AI updates.

]]>
Dexterity AI Launches the ‘World’s First Industrial Superhumanoid Robot’ https://analyticsindiamag.com/ai-news-updates/dexterity-ai-launches-the-worlds-first-industrial-superhumanoid-robot/ Wed, 19 Mar 2025 12:30:41 +0000 https://analyticsindiamag.com/?p=10166346 The robot, Mech, can autonomously navigate industrial sites and perform repetitive, physically demanding tasks.]]>

California-based physical AI and robotics company Dexterity AI has launched Mech, which it claims is the first industrial superhumanoid robot designed to transform enterprise operations. 

The robot, equipped with two arms mounted on a rover, can autonomously navigate industrial sites and perform repetitive, physically demanding tasks. Mech aims to address the challenges of efficiency and workplace safety faced by enterprises globally.

Mech combines human-like dexterity with superhuman strength, capable of lifting up to 130 lbs (~58 kg) and stacking boxes as high as eight feet. It integrates seamlessly with existing industrial infrastructure and automation systems. 

Samir Menon, CEO of Dexterity, stated, “Mech represents a major leap forward in our mission to empower people with intelligent, flexible robots that safely and efficiently solve complex, labour-intensive industrial challenges.”

The robot operates autonomously using four steerable wheels and Dexterity’s onboard physical AI supercomputer, as the company mentioned on its official blog

It utilises AI models to tackle tasks such as palletising boxes and handling fragile packages with precision. Enhanced by 16 onboard cameras, Mech can optimise packing strategies even in extreme temperatures ranging from 32°F to 122°F.

One operator can manage up to 10 Mechs simultaneously, reducing injuries caused by repetitive stress and heavy lifting.

Its capabilities can be expanded through software updates, with the initial application focusing on truck loading. Additional apps are planned for release later in 2025.

Dexterity’s innovation marks a significant step in robotics for logistics operations worldwide. The company aims to expand Mech’s functionality to further revolutionise industrial workflows.

The company relies on the approach of physical AI infusing human-like dexterous skills, creating “any robot for any application”.

For industrial robots, Shenzhen-based UBTECH Robotics recently completed what is claimed to be the world’s first multi-humanoid robot collaborative training program at Zeekr’s 5G Intelligent Factory. Using the company’s Walker S1 humanoid robots, it portrayed collaboration across various production zones.

Moreover, in a recent interview with AIM, Satish Shukla, co-founder of industrial automation company Addverb, discussed the company’s plans to launch its humanoid this year.

He believes it will take another three to four years for humanoids to become as prevalent as humans. “For every human, we probably might have one humanoid,” he added.

]]>