AI Features – Analytics India Magazine (https://analyticsindiamag.com) — AIM: News and Insights on AI, GCC, IT, and Tech

Now It’s Time for Vibe Debugging
https://analyticsindiamag.com/ai-features/now-its-time-for-vibe-debugging/ | Fri, 21 Mar 2025
Vibe coding is a thing now. And so is vibe debugging.

Vibe coding is a term coined by OpenAI co-founder Andrej Karpathy. It describes focusing on the idea rather than the code and letting AI build something out of it. While vibe coding is popular among coders and non-coders alike, the phrase ‘vibe debugging’ is catching up.

Debugging is Becoming More Important With Vibe Coding

A coder on Reddit shared with AIM that after starting to code with Claude Sonnet 3.5 without any prior programming knowledge, they realised that half of the implementations were not functional for their project, and fixing those issues became their prime concern.

“In the end, debugging is still necessary, as LLMs will get into loops, or you’ll hit a wall where they cannot fix a bug,” the Redditor added. “Having a human who knows what they’re doing and can find the source of the issue is still paramount, as LLMs can spin in circles infinitely without any idea that they’re attempting to fix the wrong part of the codebase.”

Meanwhile, Nitin Rai, an AI engineer, told AIM that if one is not a developer, they should be aware of the potential pitfalls, as vibe debugging is 10x more frustrating than regular debugging. “Being dependent on the LLM’s output, we don’t form a mental model of how data flows, how it’s transformed, and where and when something breaks. It’s too late,” Rai said.

‘Vibe Coding Isn’t Perfect’

Vibe coding has made code accessible to a larger audience, including those without any technical knowledge, and empowered them to build various applications and games.

However, Reddit has been exploding with threads citing concerns associated with it. To start with, a Reddit user posted, “Forget Vibe coding. Vibe debugging is the future. Create 20,000 lines in 20 minutes, spend 2 years debugging.”

Among the reactions to the Reddit threads, users have riffed on the term with humorous takes like “spookghetti code” and “vibeghetti code”.

In a Reddit thread, a user stated, “Vibe coding is the future unless you need to do vibe maintenance.”

Another user encouraged using AI models like Claude as a co-pilot, not an autopilot. One needs to read and understand the code; otherwise, a failed vibe check might be the reason for a server meltdown.

In the same thread, the original poster highlighted that vibe coding is risky in a production environment. While calling it a personal decision, the user noted that proper logging and tests may be necessary to keep things under control.

With many people jumping into code with the help of AI, the focus on debugging is crucial as more code goes into production. Per a report, the debugging and error detection function segment is also set to grow at a 24.2% CAGR through 2030.
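As a quick sanity check on what a 24.2% CAGR implies, compound growth can be computed directly. The figures below are illustrative, not taken from the cited report:

```python
def compound(value: float, rate: float, years: int) -> float:
    """Compound `value` at `rate` per year for `years` years."""
    return value * (1 + rate) ** years

# A segment of size 1.0 today, growing at a 24.2% CAGR,
# roughly triples in five years:
growth_factor = round(compound(1.0, 0.242, 5), 2)  # about 2.96
```

In other words, a segment growing at that rate nearly triples every five years, which explains the attention debugging tooling is attracting.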

Mohmoud Zareef, GenAI software engineer at TestOne Teknoloji Çözümleri, told AIM that he hates the phrase “vibe coding” or “vibe debugging.” He believes it implies that developers who can code and utilise AI are not true developers, adding unnecessary stigma and making programming appear inferior.

On the same note, Zareef added that some AI-generated coding bugs are simple, while some are pretty complicated. “I find learning how to use AI well makes it much easier to decrease the number of bugs; for example, always ask AI not to over-engineer,” he said. “Reading the documentation or searching online can save hours of wrestling with the AI to fix a bug.”

Developers Beware! AI Coding Tools May Aid Hackers
https://analyticsindiamag.com/ai-features/developers-beware-ai-coding-tools-may-aid-hackers/ | Fri, 21 Mar 2025
Security researchers have found that hackers can exploit GitHub Copilot and Cursor coding assistants.

AI coding is a security mess, and AI coding assistants are already in the crosshairs.

The threat posed by AI coding assistants just got real when security researchers uncovered a new attack vector that enables hackers to weaponise coding agents in GitHub Copilot and Cursor.

Rules File Backdoor is a New Attack Vector

The security researchers at Pillar Security have uncovered a new supply chain attack vector named “Rules File Backdoor.” The technique, labelled dangerous by researchers, enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions.

The instructions can hide inside innocent-looking configuration files used by Cursor and GitHub Copilot.

Instructions are injected into rule files, the configuration files that guide AI agent behaviour when generating or modifying code. They shape the coding standards, project architecture, and best practices involved in AI-generated code.

Here is what a rules file looks like from Cursor’s documentation:
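For illustration, a hypothetical rules file of this kind is essentially plain-language guidance, along these lines:

```
# Hypothetical rules file, for illustration only
You are a senior TypeScript developer on this project.
- Follow the repository's existing coding conventions.
- Prefer small, well-tested functions.
- Never add dependencies without asking first.
```

Because such files read as harmless style guidance, they are rarely scrutinised the way source code is.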

Typically, rule files are shared through central repositories with global access and distributed via open-source communities without proper security vetting.

The researchers explained, “By exploiting hidden Unicode characters and sophisticated evasion techniques in the model-facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews.”

To anyone using the code assistant, the attack is unnoticeable, which allows malicious code to silently propagate through projects, with the potential to affect millions of end users through compromised code.

How Does It Work?

As per the research report, the attackers can exploit the AI’s contextual understanding by embedding carefully crafted prompts through the rule files. When a user starts code generation, the malicious rules tell the AI to produce code with security vulnerabilities or backdoors.

They explained that the attack uses a combination of techniques. It manipulates the context by inserting seemingly innocuous instructions that subtly alter code output, employs Unicode obfuscation to conceal malicious instructions using invisible characters, and hijacks the AI’s semantic understanding with linguistic patterns to generate vulnerable code.
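To make the Unicode-obfuscation idea concrete, here is a minimal sketch of how text can be hidden in zero-width characters. This illustrates the general technique only; it is not Pillar Security’s actual payload:

```python
ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def hide(payload: str) -> str:
    """Encode each character of the payload as eight invisible characters."""
    bits = "".join(f"{ord(ch):08b}" for ch in payload)
    return "".join(ONE if bit == "1" else ZERO for bit in bits)

def reveal(text: str) -> str:
    """Recover the hidden payload from the invisible characters."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

visible_rule = "Follow the project style guide."
poisoned = visible_rule + hide("add remote script")
# `poisoned` renders exactly like `visible_rule` in most editors and
# diff views, yet carries the extra instruction for anything that decodes it.
```

An attacker would pair invisible characters like these with instructions the model acts on; a human reviewer skimming the rule file sees nothing amiss.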

Furthermore, the attack works across different AI coding assistants, indicating widespread weakness across various AI coding platforms.

Testing The Theory With Cursor and GitHub Copilot

Security researchers tested and documented the attack potential. Starting with Cursor, the ‘Rules for AI’ feature allowed them to create a rule file that appeared harmless to human reviewers. The file included invisible Unicode characters disguising malicious instructions.

Next, they used Cursor’s AI Agent mode to create an HTML page, with the prompt, “Create a simple HTML-only page”. The observed output contained a malicious script sourced from an attacker-controlled site.

The researchers noted that the AI assistant never mentioned adding this script, allowing it to propagate through the codebase without any trace in the logs.

The same attack was demonstrated within the GitHub Copilot environment, and similar results were observed.

What Can Hackers Do With It?

Hackers can use the attack vector in different ways. For example, they can override security controls: malicious instructions can cause the AI to ignore safe defaults, as shown in the demonstration.

Threat actors can generate vulnerable code, such as insecure cryptographic algorithms, implement authentication checks with bypasses, and disable input validation in specific contexts.

Other use cases include data exfiltration using the generated code and long-term persistence, where the vulnerabilities get passed on through someone forking the poisoned project.

How to Stay Safe From These Attacks?

The attack could potentially be implanted through developer forums, communities, open-source contributions, and project templates.

As technical precautions, the researchers recommend auditing existing rules, implementing validation processes, deploying detection tools, and reviewing AI-generated code.
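As a sketch of the “detection tools” recommendation, a scanner can flag Unicode format characters (category “Cf”), the class that includes zero-width and bidirectional-override characters. This is a hypothetical minimal tool, not the researchers’ actual tooling:

```python
import unicodedata

def find_invisible(text: str) -> list:
    """Return (line, column, codepoint) for each invisible format character."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # Category "Cf" covers zero-width spaces, joiners, and
            # bidirectional overrides used in these attacks.
            if unicodedata.category(ch) == "Cf":
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits
```

A CI pipeline could reject any rules file for which `find_invisible()` returns a non-empty list.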

The companies behind the AI coding assistants did not take responsibility for the security issues flagged by the researchers, stating that users are responsible for protecting against such attacks.

Researchers believe that AI coding tools have created an environment for a new class of attacks. Hence, organisations must move beyond traditional code review practices.

‘Nobody Needs to Die of Breast Cancer’
https://analyticsindiamag.com/ai-features/nobody-needs-to-die-of-breast-cancer/ | Fri, 21 Mar 2025
Niramai has developed an AI-driven solution which converts thermal images of the chest into cancer health reports.

Breast cancer is one of the most common and life-threatening diseases affecting women worldwide. According to the WHO, in 2022 alone, around 2.3 million women were diagnosed with it, and 670,000 lost their lives. Despite medical advancements, breast cancer continues to pose a major health challenge, especially in low-resource regions where access to early detection and treatment is limited.

Speaking at Rising 2025, Geetha Manjunath, managing director at Niramai Health Analytix, shared how she transitioned from a computer scientist to an entrepreneur after her cousin passed away due to breast cancer.

“One of my very close cousin sisters, a few years younger [than me], was diagnosed with late-stage breast cancer. That was extremely shocking,” she said. This personal experience motivated her to leave her corporate job and establish Niramai eight years ago.

Challenges in Breast Cancer Detection

Manjunath said that breast cancer is a major health concern, with approximately 2,000 deaths occurring worldwide daily. “Nobody needs to die of breast cancer. It is completely curable, but late detection leads to high mortality rates.” Noting that 50% of breast cancer deaths occur in Asia due to late diagnosis, she added, “96% of people go to a hospital only when they notice a lump, which is already a late stage.”

Manjunath revealed that traditional screening methods pose several challenges. Mammograms, which are the standard detection tool, are expensive, require skilled operators, and are recommended only for women above 45 years. “27% of cancer deaths happen under 45, and there is absolutely no test that is objective or standard for detecting breast cancer under 45 years today, anywhere in the world,” she explained.

Introducing Thermalytix

Niramai has developed a novel AI-driven solution called Thermalytix, which converts thermal images of the chest into cancer health reports. “We just measure the temperature variations using a thermal sensor, placed two and a half feet away, without any radiation or touch,” Manjunath described.

This non-invasive, privacy-friendly method uses AI algorithms to detect abnormal temperature patterns. “The AI processes thermal images and marks areas of concern, providing a report within minutes,” she said. Unlike mammograms, this technology works for women of all ages, from 18 to 80, making it widely accessible.
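As a toy illustration of what “marking areas of concern” can mean for a thermal map — and emphatically not Niramai’s actual Thermalytix algorithm — one could flag cells whose temperature deviates sharply from the overall baseline:

```python
from statistics import mean, stdev

def flag_hotspots(grid, z_threshold=2.5):
    """Return (row, col) positions whose temperature z-score exceeds the threshold."""
    values = [v for row in grid for v in row]
    m, s = mean(values), stdev(values)
    return [(r, c)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if (v - m) / s > z_threshold]

# A mostly uniform 36.5°C map with one warm spot at the centre:
thermal_map = [[36.5, 36.5, 36.5],
               [36.5, 39.0, 36.5],
               [36.5, 36.5, 36.5]]
```

Real systems use trained models over far richer features, but the core idea of scoring temperature deviations against a baseline is the same.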

She mentioned that thanks to AI and the innovations associated with it, these screenings can now be provided in hospitals, outreach programmes, and corporate settings.

This is not the first time AI has been used to detect breast cancer. Previously, researchers at the Massachusetts Institute of Technology (MIT) developed a deep learning system called ‘Mirai’ to predict breast cancer risk from mammograms. It gained attention as it can detect breast cancer five years before it develops.

Impact and Adoption

Several hospitals, including HCG, Apollo Clinic, and Narayana Health, have adopted Niramai’s technology. “We have also expanded internationally, with adoption in over 20 countries, including the US, Europe, and parts of Asia,” Manjunath stated. Niramai has received regulatory clearances from India, the European Union, and the United States, ensuring its global applicability.

Privacy and data security are crucial considerations. “We comply with ISO 27001, General Data Protection Regulation (GDPR), and Health Insurance Portability and Accountability Act (HIPAA) regulations to ensure data privacy and security,” she confirmed.

Future Prospects

Looking ahead, Niramai plans to extend its technology beyond breast cancer detection. “Why can’t we use the same technology for other abnormalities? Some doctors have already asked us to explore this,” she concluded.

AI Search Will Define the Next Generation of Business—Here’s Why
https://analyticsindiamag.com/ai-features/ai-search-will-define-the-next-generation-of-business-heres-why/ | Thu, 20 Mar 2025
The AI Business Trends 2025 report by Google sheds light on how AI has changed the way the world discovers information and the benefits of enterprise search.

Whether searching for a resource on Google or looking for a favourite food within an app, AI-powered search is present nearly everywhere. From specialised AI search engines to advanced platforms designed to replace conventional ones, the way information is discovered is being reshaped.

So, what about AI-powered searches geared towards enterprises? 

The AI Business Trends 2025 report by Google sheds light on how AI has changed how the world discovers information and the benefits of enterprise search.

Enterprise Search Market to Experience a Surge in Growth

The enterprise search market size is set to reach $12.9 billion by 2031. 

As per the Google report, the advanced AI-powered search capabilities now let users seek information in a way that mirrors how they naturally experience the world.

The AI-driven search tech includes site search, product search, and customer support self-service search. This is helping organisations enrich and optimise product data catalogues, save significant manual work, and improve conversion and cross-selling efficiency.

Prominent companies are already adopting AI-based search capabilities.

“Snap (Snapchat) deployed the multimodal capability of Gemini within their ‘My AI’ chatbot and has since seen over 2.5 times as much engagement within Snapping to My AI in the United States,” the report stated.

Not just limited to tech companies: the Mayo Clinic has also benefited from such capabilities, giving thousands of its scientific researchers access to 50 petabytes of clinical data through Vertex AI Search, facilitating information access across multiple languages.

Benefits of AI-Powered Search for Enterprise

Advanced search tools provide immense value to businesses. The report highlights three separate benefits: faster access to data, advanced and intuitive searches, and deeper AI-powered insights.

Regarding data access, enterprise search can help employees quickly and efficiently find and utilise internal data, boosting productivity. This should help them with more informed decision-making.

As for intuitiveness, users and employees can run complex queries across various data formats (documents, spreadsheets, and multimedia) to get relevant information. AI-powered search can even replace multiple separate tools.

The report highlighted that integrating AI agents with enterprise search will elevate knowledge retrieval significantly. These agents are capable of accessing and analysing company data, executing complex tasks, and providing valuable recommendations.

Meanwhile, Aashima Gupta, global director of healthcare strategy and solutions at Google Cloud, said, “We expect to see greater adoption of intuitive, contextual search that understands medical terminology, complex vocabulary, and abbreviations—helping relieve administrative burdens for medical professionals while improving patient education and research.”

Furthermore, Zac Maufe, managing director of regulated industries at Google Cloud, said, “We expect to see more financial institutions prioritising robust internal knowledge search for their employees, tailored to their specific roles. For example, a loan officer would receive different results than a risk analyst when searching for information about a particular loan application.”

“We expect GenAI will continue to transform search in retail, allowing customers to find products using natural language, images, or voice commands to deliver higher quality search results,” said Paul Tepfenhart, director of global retail strategy and solutions at Google Cloud.

Hence, it appears that AI-powered search will impact industries including finance, retail, healthcare, and life sciences.

The benefits of AI-powered search also extend beyond the enterprise. Companies that adopt these tools deliver new levels of service and support to their customers. 

For instance, Moody’s Corporation uses LLMs from Google Cloud to help employees sift through public documents and the firm’s database to write analyses. This not only improves employee efficiency but also enhances the quality of service provided to Moody’s clients.

Evolution of Search and the Path Forward

Building a robust search system is a complex task, whether it is for Google or any other company. 

The report states that before generative AI, enterprise search systems were keyword-based and often delivered irrelevant results, leading to frustrating user experiences. Going forward, businesses can integrate LLMs into their legacy systems to improve search accuracy and relevance.  

While building AI-powered search systems can be challenging, companies like Google are trying to make it easier. These solutions remove the complexity from search systems, making it easier for companies to implement and benefit from AI-powered search.  

AI-powered search is revolutionising how businesses operate and interact with their customers. By making knowledge discovery faster, more intuitive, and more relevant, AI transforms enterprise search into a powerful tool for innovation, growth, and enhanced customer service. As AI technology continues to evolve, we can expect more drastic changes.

Narayana Health Proves You Don’t Need Excel to Build a Data-Led Masterpiece
https://analyticsindiamag.com/ai-features/how-narayana-health-built-a-data-led-strategy-without-excel-sheets/ | Thu, 20 Mar 2025
“After 20 years of my career, I’m taking over as CFO of a listed company with no Excel sheets. I thought they were setting me up to fail,” Sandhya Sriram said.

When Sandhya Sriram took over as the group chief financial officer at Narayana Health, she expected to rely on the same Excel sheets that had been her lifeline throughout her career in finance. However, there were none. Instead, everything was integrated into a data tool called Medha. 

“After 20 years of my career, I’m taking over as CFO of a listed company with no Excel sheets. I thought they were setting me up to fail,” Sriram explained while talking about the data-led strategy at Narayana Health at The Rising 2025.

What she didn’t realise at the time, she revealed, was that Narayana Health was already ahead of its time, operating in a future where data-driven decisions had replaced the tedious manual processes of the past.

Letting Go of the Old Ways

For finance professionals, control often comes from managing numbers in Excel and PowerPoint. At Narayana Health, however, financial reviews weren’t prepared over weeks of back-and-forth data collection. Instead, everything was dynamically available on Medha. “No PPTs, no firefighting, just clean data in real-time,” Sriram said.

This transformation wasn’t just about convenience—it was about precision. The platform enabled real-time tracking of revenue, cost efficiencies, and operational bottlenecks. “Healthcare revenue isn’t something you can manipulate—it depends on patient inflow. But costs and quality? Those we can control,” she explained.

Narayana Health was founded by Dr Devi Shetty with a mission to make healthcare more affordable. One of the most impressive ways they’ve achieved this is by using AI and predictive analytics to manage inventory. 

“In FMCG, inventory write-offs are common. But when I joined Narayana, I found that we hadn’t taken a single inventory write-off for three years,” she said. This was because Medha could predict which pharma inventory was about to expire and ensure it was used in time.

Predictive AI also played a crucial role in operational efficiency. For example, Medha optimised the use of operating rooms by analysing patterns and suggesting scheduling improvements. “Every digital intervention has a cost—not just in money, but also in change management. People resist new systems, so we had to ensure that every investment in tech delivered real returns.”

Medha Does it All

Beyond operations, Medha has transformed Narayana Health’s marketing efforts. “Marketing used to be a black box—you spent money and hoped for the best. Today, digital tools allow us to track ROI for every rupee spent,” Sriram explained. Finance teams that once struggled to quantify the impact of marketing spend can now measure it with data-backed key performance indicators (KPIs).

Moreover, budgeting has also evolved. Instead of pulling data from different sources manually, Medha runs detailed financial scenarios at every level. “As a CFO, I want to know: what if revenue dips by 5%? What if a cost centre overruns? Medha lets me test these scenarios instantly,” Sriram explained. 

The ability to simulate financial outcomes in real time gives leadership a significant advantage in planning and risk management.
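The kind of what-if question Sriram describes can be sketched as a simple scenario function. This is a hypothetical illustration of the idea, not Medha’s actual model:

```python
def scenario(revenue: float, costs: float,
             revenue_change: float = 0.0, cost_overrun: float = 0.0) -> float:
    """Return the projected margin after applying the scenario adjustments."""
    projected_revenue = revenue * (1 + revenue_change)
    projected_costs = costs * (1 + cost_overrun)
    return projected_revenue - projected_costs

base = scenario(100.0, 80.0)                        # baseline margin
dip = scenario(100.0, 80.0, revenue_change=-0.05)   # revenue dips by 5%
overrun = scenario(100.0, 80.0, cost_overrun=0.10)  # a cost centre overruns
```

A real platform would run such adjustments across every cost centre and revenue line from live data rather than two scalar inputs, but the instant feedback loop is the same.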

Building a Data-Centric Culture

For any data initiative to succeed, leadership buy-in is crucial. Narayana Health ensured that business leaders—not just the tech team—owned the digital transformation. “Every rupee invested in data analytics had to deliver a tangible impact, whether in cost reduction, revenue growth, or operational efficiency,” she emphasised.

However, some challenges remained. “Initially, every operational committee review had someone questioning data accuracy. But we made a rule—no external data sources would be entertained. The numbers had to come from Medha, even if there were issues. We had to trust the system for it to work,” she said.

Narayana Health operates with a revenue per patient significantly lower than competitors, yet its profitability remains strong. “We don’t focus on extracting maximum revenue from patients. We focus on running an efficient operation. Data is our key enabler,” she said. 

The approach has been recognised globally, including a Harvard Business School case study and a Netflix feature on Narayana Health’s model.

At the core of this transformation is the in-house data team behind Medha. “They aren’t just building dashboards; they’re driving impact. Every new dashboard must show measurable value, whether in cost savings, revenue improvements, or productivity gains,” Sriram concluded.

By eliminating reliance on Excel sheets, shifting financial decision-making to real-time analytics, and embedding a data-driven culture, Narayana Health has set a new standard for how businesses—especially in healthcare—can leverage technology to drive efficiency and affordability. 

Moreover, for CFOs like Sriram, it’s proof that, sometimes, letting go of old ways is the only way forward.

Are Adobe’s AI Agents the Final Step to Fully Automated Customer Service?
https://analyticsindiamag.com/ai-features/is-adobes-ai-agent-the-final-step-toward-fully-automated-customer-service/ | Wed, 19 Mar 2025
In an exclusive conversation with AIM, Klaasjan Tukker, director of product management at Adobe Experience Platform, highlighted how Adobe’s AI-powered solutions are transforming digital experiences.

A recent NVIDIA survey on AI adoption in financial services found that 60% of companies are now exploring generative AI and large language models (LLMs) to elevate customer engagement. With companies looking to integrate real-time, context-aware solutions, AI is shifting from reactive to proactive customer interaction.

But AI in customer service is evolving beyond simple automation. The next step is agentic AI: semi-autonomous systems capable of perceiving, reasoning, and acting on complex problems without human intervention.

AI-Powered Customer Service 

At the ongoing Adobe Summit 2025, the company launched the Adobe Experience Platform Agent Orchestrator, introducing a new capability for businesses to integrate AI agents into customer experiences and marketing workflows.

In an exclusive conversation with AIM, Klaasjan Tukker, director of product management at Adobe Experience Platform, highlighted how the company’s AI-powered solutions are transforming digital experiences. 

“A decade ago, digital marketing was structured around individual channels. Teams worked in silos—one managed website analytics, another focused on email campaigns, and a third handled audience targeting with third-party cookies. Customers had fragmented experiences because their interactions across channels weren’t unified.”

Recognising this challenge, Adobe built the Adobe Experience Platform (AEP) to unify customer data and enable real-time engagement.

“Instead of relying on batch-driven, brand-initiated messaging, AEP allows businesses to interact dynamically—delivering ‘in-the-moment’ experiences that are personalised and timely,” he said. 

Let’s consider a cricket match, for example. When should a brand notify a fan about new merchandise? The day before via email? Right after the match? Or the moment they scan their ticket at the stadium? According to Tukker, the last option is the most effective because real-time, context-aware engagement creates higher impact and conversions.

AI Helps Build Deeper Customer Relationships

The evolution of customer data platforms (CDPs) reflects this shift towards AI-driven personalisation. Tukker outlined three waves in the development of CDPs. 

The first wave focused on unifying data. Initially, enterprises struggled with disconnected customer information spread across multiple touchpoints. CDPs emerged to create a single, comprehensive customer profile by integrating data from various sources.

The second wave was about democratising audience insights. Simply having centralised data wasn’t enough; marketing and customer service teams needed real-time access to insights. This phase enabled teams to segment audiences efficiently without relying on IT departments, making data more accessible and actionable.

The current wave is centred on intelligent activation. Today’s AI-powered CDPs go beyond data collection and segmentation to enable real-time engagement. 

Adobe’s platform listens to consumer signals at scale, determines the next best action instantly, and delivers personalised experiences at the right moments, ensuring brands engage with customers dynamically and effectively.

“During the Super Bowl, millions of people stream the game at once. Brands can’t wait hours to engage them. Adobe’s AI ensures personalised customer interactions happen instantly, at scale,” Tukker explained.

Adobe vs Salesforce vs Microsoft

While Adobe, Salesforce, and Microsoft dominate the customer experience (CX) space, Adobe stands out by blending data-driven insights with creative storytelling.

“Salesforce and Microsoft focus heavily on data. Adobe goes a step further by combining data insights with compelling content to create emotionally resonant customer experiences,” Tukker said.

According to him, Adobe’s strength lies in its ecosystem, which includes content creation at scale (Adobe Creative Cloud), customer understanding through AI-driven insights, and personalised engagement in real time.

Moreover, Adobe Experience Cloud integrates seamlessly with Microsoft Azure and Amazon Web Services (AWS), giving enterprises flexibility in how they deploy their AI-powered customer service tools.

At the summit, Adobe announced a collaboration with AWS to build new offerings that empower marketing and creative teams to deliver customer experiences with greater speed, precision, and scale. The partnership combines Adobe’s expertise in customer experience orchestration with AWS’s advanced cloud services. 

What’s Next? 

According to the Zendesk Customer Experience Trends Report 2024, over two-thirds of CX leaders believe AI can enhance human-like customer service interactions, fostering long-term loyalty.

Some companies are already leading the charge. For example, NatWest Group and IBM developed Cora+, an AI-driven multichannel assistant that delivers personalised service using watsonx Assistant.

[24]7.ai is embracing generative AI to redefine customer service training, equipping call centre agents with AI-powered learning methodologies.

For Squadstack, AI helps streamline tasks like quality auditing, which was once done manually by agents who listened to calls and graded them—a monotonous and error-prone task. Now, AI consistently audits thousands of interactions simultaneously without fatigue. 

Similarly, AI manages simpler conversations through chatbots, providing instant responses and reducing wait times. 

[With inputs from Anjali Nair, senior social media specialist at AIM.]

Kunal Bahl Suggests Swades-Style Ghar Wapsi for Indian Startups
https://analyticsindiamag.com/ai-features/kunal-bahl-suggests-swades-style-ghar-wapsi-for-indian-startups/ | Wed, 19 Mar 2025
In 2025, over 90 companies have filed their draft prospectuses in India, aiming to raise an estimated INR 1 trillion or $11.65 billion.

In a trend that has become increasingly popular over the years, many Indian founders register their startups in the West or Singapore. Why, you may ask? The benefits are plentiful. For starters, it makes fundraising easier by giving them access to more institutional and technology investors.

However, not everyone is impressed by the idea of foreign-born startups of Indian founders. 

Snapdeal co-founder Kunal Bahl, who recently shared a detailed thread on X, argues that Indian startups should be incorporated domestically rather than overseas. His stance challenges what was once conventional wisdom among Indian entrepreneurs who believed foreign incorporation provided fundraising advantages, including easier exits.

“In 2025, that’s outdated. Today, incorporating in India isn’t just patriotic—it’s pragmatic,” Bahl said. “Indian investors now back all kinds of startups, including deep tech and AI, and one doesn’t need to find investors overseas to fund these spaces,” he explained.

Bahl argued that recent government initiatives have brought in improved taxes and regulatory clarity for Indian startups. In the 2024 Union Budget, finance minister Nirmala Sitharaman announced the abolition of angel tax (effective FY2025-26) for all investor classes, addressing a long-standing concern in the startup ecosystem.

The tax, which previously treated investments above fair market valuation as income, had been a significant deterrent for domestic incorporation. 

Additionally, the finance minister extended the definition of “eligible startup” under the Startup India scheme to include entities incorporated between April 1, 2016, and March 31, 2025. It allows more startups to benefit from the tax holiday offered under the scheme, directly supporting Bahl’s point about India’s startup-friendly tax policies.

Are Startups Really Coming Back?

In 2025, over 90 companies have filed their draft prospectuses in India, aiming to raise an estimated INR 1 trillion, or $11.65 billion. About 34 companies have either already raised funds or announced plans to do so.

Citing the success of recent IPOs like Zomato, Nykaa, and Unicommerce, Bahl said that the Indian market is ready. “If you plan to list in India, incorporating here avoids costly ‘flipping’ later,” he said. 

Quick-commerce platform Zepto announced its ‘ghar wapsi’ in January 2025, moving from Singapore to India. This aligned with the company’s plans to launch its IPO, making KiranaKart, its holding entity in India, the parent company. However, the company is expected to attract a sizeable tax bill for this ‘flip’.

Meanwhile, Groww, a financial trading platform, shifted its domicile status from the US to India last year under its parent company, Billionbrains Garage Ventures.

Walmart-owned Flipkart is also reportedly planning to move its domicile from Singapore to India for the same reasons. According to a report by The Economic Times, Meesho is reverse flipping from Delaware in the US back to India, while Pine Labs is also moving back from Singapore. PhonePe is also moving back to India and is eyeing an IPO.

In a previous interaction with AIM, Akash Aggarwal, MD (investment banking) at Motilal Oswal Financial Services, reflected upon the same. 

Having participated in several IPOs in the past, Aggarwal said that despite claims of the market being down, IPOs are seeing decent subscriptions among Indian investors. “A majority of the money comes from Indian investors and not foreign investors,” Aggarwal told AIM.

He added that as of January 21, 2025, the BSE index had fallen by almost 10% compared to September 2024, when the market surpassed the 84,000 mark, its highest ever.

“Almost 70-80% of the interest is from domestic investors. Of the several companies that I am in touch with, some are going IPO, and I think this is the right time for it because it might take at least 9-12 months for them to launch the deal,” Aggarwal said, adding that if a company is mature enough, it should think about it.

In an interview with BusinessLine in January, Mahavir Lunawat, chairman of the Association of Investment Bankers of India, said that Indian firms used to take pride in raising funds abroad, but now foreign firms line up to raise funds in India.

The Department for Promotion of Industry and Internal Trade (DPIIT) recognition system provides substantial benefits to domestically incorporated startups, including tax exemptions and a simplified compliance process, as Bahl highlighted in his thread.

The Problem with Ghar Wapsi

Vaibhav Dusad, co-founder of SurgeGrowth, while speaking about Bahl’s analysis on LinkedIn, said that it completely misses the ground reality. “Sentiment-wise, I’m all in—build in India, win from India, keep the tax rupees here,” he said. “But as a SaaS founder who’s actually done it, I can tell you it isn’t as shiny as the pitch.”

Dusad said that India’s IPO market is a legitimate exit path, since about $5 million in revenue can get a startup there, far less than the roughly half a billion dollars that US startups typically aim for. “In the US, it seems too unrealistic to do an IPO—even biggies like Stripe haven’t done it yet,” Dusad added.

He highlighted problems like the KYC process when wiring money from the US and the poor infrastructure. “In India, you’re not just fighting for customers—you’re wrestling with bureaucracy and red tape. Ultimately, your focus is diverted from what matters most: building your product!”

However, the continued emergence of Indian unicorns supports Bahl’s perspective. In February 2025, the banking technology platform Zeta became India’s newest unicorn, reaching a $2 billion valuation with its latest funding round. 

According to data from AIM Research’s AI Startup Funding Report India 2025, early-stage funding fell by a substantial 37% compared to last year, indicating that investors are looking for startups with proven market viability. However, this applies only to AI startups in India.

According to Tracxn’s Geo Annual Report – India Tech–2024, a total of $11.31 billion was raised that year, an increase of 3.68% over the $10.91 billion raised in 2023, but a significant drop of 56.17% from the $25.80 billion raised in 2022.

These market conditions raise questions about Bahl’s first point that “nearly all VCs are funding Indian entities”. While Indian investors may indeed be willing to back various types of startups, including deep-tech and AI ventures, the overall funding environment appears constrained.

Six startups — Cashfree, Zeta, ToneTag, SpotDraft, Udaan, and Geniemode — raised more than $50 million in 2025 alone. This demonstrates that significant funding remains available for promising Indian-incorporated startups. 

Meanwhile, in the second half of 2025 and 2026, at least 60 companies are expected to seek exits through IPOs, mergers, or acquisitions. 

As Dusad put it: “Incorporate in India if you’re playing the long game—IPOs & local pride are worth it. But if you’re early-stage, chasing PMF, and need to move fast, the US is still king.”

]]>
Why Apollo.io Switched from GitHub Copilot to Cursor https://analyticsindiamag.com/ai-features/why-apollo-io-switched-from-github-copilot-to-cursor/ Tue, 18 Mar 2025 13:30:00 +0000 https://analyticsindiamag.com/?p=10166253 Cursor, an integrated development environment (IDE) designed to be “AI native,” has been making quite a noise since its launch in January 2023]]>

Built on top of Microsoft’s Visual Studio Code, Cursor set out with a clear ambition: to go beyond existing AI coding tools like GitHub Copilot, which had already gained popularity since its official launch in 2022. 

California-based go-to-market (GTM) platform Apollo.io swiftly moved its engineering team from GitHub Copilot to Cursor, which seemed to have “gotten it right”.

In an interview with AIM, Himanshu Gahlot, VP of engineering at Apollo.io, and Saravana Kumar, head of machine learning at Apollo.io, discussed the reason behind their engineering team’s switch from GitHub Copilot to Cursor. 

Apollo.io is an AI-powered B2B sales platform designed to empower revenue teams with cutting-edge sales intelligence and engagement tools.

“We started using GitHub Copilot early last year when it had just launched. We noticed it and quickly started using it,” Gahlot shared. “Slowly, we realised that there are better tools out there, or at least the ones we could use even more effectively within our company.”

Is Cursor Really So Special?

Putting it in simpler terms, Gahlot said that Cursor uses a newer AI approach in which a developer can ask it to do several things at once, and it takes care of them in one go.

The team ran a pilot program using Cursor with their engineers, and the response was overwhelmingly positive. “We got a 90% plus satisfaction rate. Almost every engineer said positive things about being able to understand the whole code base and generate the right things,” Gahlot added.

But while the tool showed promise, Gahlot warned that there is a lot of hype around AI tools that don’t always match reality.

“It did come with a caveat. You’d often hear people hyping these tools, claiming productivity gains of 25x or even 50x — but it’s very nuanced,” he said.

Gahlot added that these tools work really well if starting from scratch—what he calls “0 to 1” use cases. They can be incredibly helpful when building something new or just putting together a prototype or demo.

But things get tricky when dealing with large, complex code bases that have been developed over many years. “When it comes to a 10-year-old code base with millions of lines of code and like 30-40,000 individual files, then that is not how you would use it,” Gahlot said.

In such cases, Gahlot said, teams need time to figure out the right way to use the tool, and engineers need proper training.

Adding to this, Kumar pointed out that people tend to either overestimate or underestimate the capability of AI tools.

“I would say that is not even an important aspect,” he said, referring to converting natural language into code. “The important aspect is to understand what it can and can’t do.”

He explained that if somebody can clearly describe what they want, including all the details, assumptions, and context, turning that into working code is mostly handled by AI tools now.

“What is actually not done [by them] is figuring out how we solve the problems. That’s where humans come in,” Kumar said. 

Reason Behind the Shift

Gahlot said they moved from GitHub Copilot to Cursor because, although the former did reasonably well at auto-completion and small code generation, their engineers were not finding much success beyond that.

“There wasn’t an ‘aha!’ moment there. It wasn’t like, you know, you want to get something done, and it would just do it for you.”

However, when the team tried out Windsurf and Cursor, they found their “aha!” moment. “It is like, you can chat with your code, write anything you want done, especially in a zero-to-one use case, and it just does it for you, rather than completing part of your code or suggesting a few things,” he explained.

Apollo.io began adopting Cursor more widely and started seeing higher satisfaction among engineers using the IDE. However, as he pointed out, it’s not a one-size-fits-all solution. 

He explained that different roles within engineering teams have different needs. The same applies to machine learning engineers, back-end developers, and front-end teams. 

Other Big-Tech Collaborations

Apollo.io has now onboarded three major AI providers (OpenAI, Anthropic, and Google) and is constantly experimenting with their new models. Gahlot believes the future will see businesses relying on multiple AI models for different tasks.

Gahlot also spoke about the company’s early collaboration with Anthropic. “We have been early partners with Anthropic on multiple things, specifically on the model context protocol (MCP) initiative that they recently launched,” he said. 

“We were one of the first companies they launched MCP with. I think initially there were about 10 startups, and we were one of them.”

Currently, about 200 of Apollo.io’s 700 employees are based in India, spread across engineering, product, recruiting, sales, and support. Of those, about 160 are in engineering and 40 in other departments, which means India accounts for roughly 65% of the company’s engineering team.

]]>
Wi-Fi Troubles are About to be a Thing of Past, Thanks to AI https://analyticsindiamag.com/ai-features/wi-fi-troubles-are-about-to-be-a-thing-of-past-thanks-to-ai/ Tue, 18 Mar 2025 12:30:00 +0000 https://analyticsindiamag.com/?p=10166245 Meesho used AI to improve call volume and retention rates and built a service-centric network that fuels business growth. ]]>

Imagine walking into your office on a Monday morning. You grab a coffee, settle into your chair, and open your laptop, only to find the Wi-Fi not working, your CRM tool struggling to load, and video calls dropping in quality. Frustration sets in. A network engineer is alerted, diving into logs and running tests, but the issue remains a mystery for hours. This scenario is common, but now, AI-driven network operations are solving this problem.

Juniper Networks’ Bengaluru centre, which has a team of 4,000 employees, handles the complete product lifecycle, from concept to development, testing, and deployment.

In an exclusive conversation with AIM, Sajan Paul, area vice president and country manager of India and SAARC at Juniper Networks, said, “Seven years ago, before AI had even become mainstream, we saw its potential in automation. We knew that managing massive networks would soon be beyond human capability alone.”

Paul, a key leader in this AI-driven shift, likens the journey to teaching a child. “You train a child, and in an appropriate environment, they should behave as taught. AI is no different. We’ve built large models and refined them through seven generations, and today, we achieve over 90% efficacy in our AI-powered network solutions,” he explained.

He further mentioned that their AI systems don’t just process data; they turn decades of network expertise into real-time, actionable insights. A problem that was solved 10 years ago should never need to be solved again. AI ensures that by contextualising past solutions and proactively addressing similar issues before they occur.

AI in Networking

AI’s role in networking has evolved through three key stages. The first is the recommendation phase, where AI suggests solutions, but human engineers still make the final call. Next comes the automation phase, where AI takes over routine tasks, significantly reducing manual interventions. 

Finally, the ultimate goal is self-driving networks, where AI detects, analyses, and resolves issues before they impact users. 

Paul noted that the company is in its seventh generation of AI maturity, with intelligent automation spanning its entire product portfolio. Global companies, from ServiceNow to some of the world’s largest IT, healthcare, and manufacturing firms, rely on Juniper Networks to power their operations because, in today’s hyper-connected world, downtime is simply not an option.

The Meesho Story

AI isn’t just about individual user experiences; it plays a crucial role in ensuring businesses stay up and running without disruption. 

For instance, Meesho, one of India’s leading e-commerce platforms, has a vast network of outsourced customer service agents handling queries in seven languages. The company needed a high-performing, AI-driven Software-Defined Wide Area Network (SD-WAN) solution to support its Voice over Internet Protocol (VoIP) applications, especially during peak call times. Their business depended on it. 

Ismail Mohideen, director of IT at Meesho, said that Juniper’s AI-driven SD-WAN solutions have not only increased call volume and retention rates but also built a service-centric network that fuels business growth. The partnership, he added, brings Meesho closer to democratising internet commerce while improving customer satisfaction.

AI-powered automation ensured that their customer service centres stayed online, call quality remained flawless, and agents could assist sellers and buyers without interruptions, no matter the time of day.

What’s Next?

One of the biggest challenges in network troubleshooting is catching problems before they escalate. Traditionally, engineers would wait for an issue, scramble to capture logs, and analyse data after the failure, often missing the critical moment. AI changes the game entirely. It continuously monitors network health, detects anomalies instantly, auto-captures logs before a failure happens, and sends real-time data to engineers for immediate action. 
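
The monitoring loop described above (watch a health metric, flag sharp deviations from a rolling baseline, snapshot diagnostics before the failure escalates) can be sketched in a few lines. This is an illustrative toy, not Juniper's implementation; the class name, window size, and z-score threshold are all invented for the example:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Toy monitor: flag samples that deviate sharply from a rolling
    baseline and auto-capture a diagnostic snapshot when they do."""

    def __init__(self, window=30, threshold=3.0):
        self.samples = deque(maxlen=window)  # rolling window of recent latencies (ms)
        self.threshold = threshold           # z-score above which we raise an alert
        self.captures = []                   # diagnostic snapshots taken at alert time

    def observe(self, latency_ms, context=None):
        alert = False
        if len(self.samples) >= 10:          # wait for a baseline before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (latency_ms - mu) / sigma > self.threshold:
                alert = True
                # capture state before the failure escalates, as described above
                self.captures.append({"latency_ms": latency_ms, "context": context})
        self.samples.append(latency_ms)
        return alert
```

Feeding it steady samples of around 20 ms and then a 200 ms spike raises an alert on the spike and records a snapshot for the engineer, rather than waiting for logs to be gathered after the fact.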

The results speak for themselves: a 90% reduction in trouble tickets, solutions deployed nine times faster, and an 85% decrease in operational expenses. Every network device Juniper deploys now comes embedded with an AI agent, functioning like the Siri or Alexa of network management, working quietly in the background to ensure near-zero downtime.

So, AI-driven networks are no longer just a vision; they are already here. 

]]>
Ather Bets Big on AI and Salesforce to Transform EV Sales https://analyticsindiamag.com/ai-features/ather-bets-big-on-ai-and-salesforce-to-transform-ev-sales/ Mon, 17 Mar 2025 13:28:53 +0000 https://analyticsindiamag.com/?p=10166170 “This is about unifying CRM, lead management, and DMS (dealer management system) into one scalable platform, enhancing the experience for both dealers and customers,” said Mankiran Chowhan, vice president of Salesforce India. ]]>

One of India’s leading electric two-wheeler manufacturers, Ather Energy, has taken a tech-first approach to customer engagement and dealer management. In partnership with Salesforce, Ather has developed a unified, AI-ready digital platform to streamline its sales, service, and customer experience operations. 

The partnership was first announced two years ago. With the integration of Salesforce’s cloud-based solutions, Ather now has a single, scalable system that connects dealers, service centres, and customers, ensuring real-time access to critical data.

Unified SaaS-EV Ecosystem

“We are a growing company in a growing industry, and things are changing rapidly—not just from a consumer point of view but also from a regulatory perspective. At the core of our business strategy is consumer experience, and that includes our dealers,” Ravneet S. Phokela, chief business officer at Ather Energy, said in an exclusive interview with AIM on the sidelines of the Ather-Salesforce event in Bengaluru.

India’s EV sector is evolving rapidly, with shifting regulations, growing competition, and changing consumer behaviour. To navigate these challenges, Ather recognised the need for a centralised platform that brings together lead management, customer interactions, and dealership operations.

On integrating AI and other emerging technologies, Phokela said it is essential to be positioned to embrace innovations as they arise within the ecosystem.

Previously, Ather’s dealership and customer management relied on multiple tools, making it challenging to track test drives, pre-orders, and post-sales service seamlessly. With Salesforce, Ather has consolidated these functions into a unified platform, allowing for real-time coordination across all stakeholders. “You can’t be a consumer-focused company if you don’t have a unified view of the customer,” he said. 

Highlighting the significance of this collaboration, Mankiran Chowhan, vice president of Salesforce India, emphasised how it enhances Ather’s scalability and operational efficiency. 

“This is about unifying CRM, lead management, and DMS (dealer management system) into one scalable platform, enhancing the experience for both dealers and customers,” Chowhan told AIM.

AI-Driven Insights Continue

Beyond just data centralisation, the system is built to integrate AI and automation, enabling smarter decision-making and predictive analytics. Chowhan highlighted Salesforce’s AI engine, Einstein AI, which processes over 2 trillion predictions daily, helping businesses personalise customer interactions and optimise operations.

“This is really about how human agents and digital agents are all working together seamlessly in maximizing productivity,” said Chowhan. 

While Ather is still in the early stages of AI adoption, its new digital backbone ensures future compatibility with AI-driven tools. This means the company can eventually implement predictive maintenance alerts, automated service scheduling, and real-time sales forecasting, all powered by AI.

Phokela emphasised the importance of seamless AI integration, stating, “Technology should be invisible. A sales guy should sell. The system should work in the background, making their job easier.”​

Chowhan highlighted how trust and transparency are key concerns in AI adoption, stating, “The world of AI is evolving rapidly, and trust and privacy are becoming critical. Most organisations are now looking at how AI ties back to their business goals.”​

For Ather, this means ensuring that AI is embedded responsibly into its digital framework, allowing for scalability without compromising data security. The partnership also ensures that as new regulatory requirements emerge, Ather’s systems can adapt and evolve without major overhauls.

Overcoming Transition Challenges

Transitioning to a unified, AI-enabled digital platform was not without its challenges. Phokela highlighted that resistance to change was one of the biggest obstacles. “If it ain’t broken, why fix it? We already had a system, and it was working fine. The decision to proactively change something was a reasonably bold step,” he stated, emphasising the risks associated with data migration and system overhaul​. 

From a technical standpoint, developing a scalable and future-ready architecture required careful planning, the Ather team explained. The system needed to be flexible and adaptable to meet the evolving needs of Ather’s dealerships and customers. The complexity of dealer-managed systems also necessitated a solid data model with 40-50 data entities, ensuring that future business pivots would not be hindered by technological limitations.

On future collaborations, Chowhan expressed optimism about empowering more businesses. 

“Agentforce, as we call it, is really the heart and centre of what Salesforce is focusing on,” she said. She added that while there is a global workforce shortage, India has a surplus of talent, yet many are not employable. “So our focus right now is making sure that we can bridge that gap and get AI to help enable that next wave of growth as India focuses on being the third largest economy and really empowering that next wave,” Chowhan said.

]]>
Did ByteDance Just Create a React Native Killer? https://analyticsindiamag.com/ai-features/did-bytedance-just-create-a-react-native-killer/ Mon, 17 Mar 2025 09:30:00 +0000 https://analyticsindiamag.com/?p=10166140 TikTok’s parent, ByteDance, transitions to the developer side of things.]]>

ByteDance has been making headlines for developing AI models and taking on a carefully crafted image of a developer-focused platform. In line with this, the parent company of TikTok has now introduced an open-source Rust-based JavaScript framework called Lynx. 

Lynx helps build cross-platform mobile and web applications. ByteDance seems to have been using Lynx internally for apps like TikTok and has now open-sourced it. The internet is abuzz with discussions on Lynx, which is seen as an alternative to Meta’s React Native, its potential, and how it tries to overcome the problems of the existing frameworks.

So, What is Lynx?

Xuan Huang, the architect of Lynx, calls it a family of technologies that empower developers to use their existing web skills to create native UIs for mobile and web from a single codebase. The core engine of the Lynx framework is framework-agnostic, as well as agnostic to host platforms and rendering backends.

In an official blog post, he explained that Lynx is designed for diverse use cases and rich interactivity, which enables it to provide engaging UIs for large-scale apps like TikTok.

Huang mentioned that Lynx powers TikTok Studio, e-commerce storefronts like Shop, and high-profile events such as Disney100 and The Met Gala on TikTok. In a nutshell, TikTok has extensively used it.

What Does Lynx Aim to Solve?

Lynx aims to give developers a platform to ship apps with a great native experience while eliminating lag. “A blank screen, a 0.1s lag in a ‘like’ animation, or an unfamiliar UI pattern can make an interface feel ‘cheap’ or untrustworthy. We believe that native primitives and responsiveness aren’t just nice-to-haves—native is a necessity,” Huang wrote.

He emphasises that even with the growing app economy, developers still face challenges in delivering experiences at scale and velocity. Lynx tries to solve this by enabling developers to build once and reach more platforms.

A notable architectural decision in Lynx is its statically enforced division of user scripting into two distinct runtimes: a main-thread runtime, powered by PrimJS, a custom JavaScript engine optimised for Lynx, and a background runtime where user code runs by default. This split lets Lynx render first frames quickly and keep interfaces responsive.
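
The dual-runtime pattern can be illustrated with a toy model. The sketch below is not Lynx's actual API (it uses Python threads and a queue in place of PrimJS, and every name is invented); it only shows the idea: the main-thread runtime paints the first frame immediately, while heavy user code runs in a background runtime and posts UI updates through a queue:

```python
import threading
import queue

# Main-thread runtime: paints the first frame immediately, then applies
# UI updates as they arrive. Background runtime: does the heavy work and
# posts results back through a queue, so it never blocks the first paint.

ui_updates = queue.Queue()
frames = []

def main_thread_runtime():
    frames.append("first frame")      # nothing to wait on: paint right away
    frames.append(ui_updates.get())   # later, apply the background result

def background_runtime():
    total = sum(range(1_000_000))     # stand-in for heavy user-code work
    ui_updates.put(f"list rendered (checksum {total})")

ui = threading.Thread(target=main_thread_runtime)
bg = threading.Thread(target=background_runtime)
ui.start(); bg.start()
ui.join(); bg.join()
```

The point of the split is visible in the order of events: the first frame never waits on the heavy computation, which is the responsiveness property Lynx enforces statically.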

Is It a Better Alternative to React Native?

Experts seem to be impressed by Lynx.

Rajesh Sahoo, senior software engineer at Tikkl, told AIM, “Lynx is still in its early days, whereas React Native is currently more stable and boasts a massive community. However, React Native has some limitations, such as relying heavily on community-maintained libraries for native functionality on Android and iOS.”

He added that Lynx could be a game-changer for its easier access to native functionality and claims about its performance optimisation.

“At the end of the day, I don’t care whether it’s Lynx, React Native, or another platform. I just hope whoever can tackle these issues, does it in a way that makes my life as a developer easier. I’ll be happy to use whatever platform gets me there,” Sahoo expressed. 

Theo Browne, YouTuber and founder of T3 Chat, elaborated on the problem with React Native on his YouTube channel. He explained that the way React Native works is not the most efficient, and a user may experience a stutter with the UI updates in an app.

While React Native has tried to improve over the years, Lynx has introduced a fundamentally different approach with two threads: the UI thread and the framework thread. This enables heavy data processing while the UI keeps updating at 60 FPS with no lag.

“I see so much potential in what they’ve built here,” Browne added.

Zack Jackson, infrastructure architect at ByteDance, highlighted the difference on X: “Lynx: Emphasises a multithreaded engine designed to achieve ‘instant launch and silky UI responsiveness’. React Native: Relies on a single-threaded JavaScript bridge to communicate between JavaScript and native code, which can become a bottleneck for performance-intensive apps.”

He added, “While React Native has made strides with its new architecture (e.g., Fabric renderer), it doesn’t inherently emphasise multithreading in the same way Lynx does.”

A thread on React Native’s Subreddit highlights reactions from developers. One user said, “So it’s using react renderer but it renders to different base UI than React Native. An alternative renderer essentially, with common bits being react but more importantly allowing you to use something other than react to make native apps (sic).”

The developer added, “Competition and innovation is cool!”.

Another developer wrote, “This seems to finally answer the ‘write once, run everywhere, react based & dom rendering native platform’.”

While the general sentiment seems positive, considering it powers an app like TikTok, it is still early days for the framework to overshadow React Native. However, a developer wrote on X, “I’m excited for Lynx and I think React really needs a kick in the pants.”

]]>
The AI Ad Strategy That Works in 100 Milliseconds https://analyticsindiamag.com/ai-features/the-ai-ad-strategy-that-works-in-100-milliseconds/ Mon, 17 Mar 2025 07:31:58 +0000 https://analyticsindiamag.com/?p=10166127 Platforms now use AI to deliver personalised ad experiences by analysing user behaviour, preferences, and demographics. ]]>

Yanadoo, a self-development and education platform, has built a strong community of 1.68 million learners in South Korea. However, with 50 million internet users across the country, the company recognised an even bigger opportunity. Determined to expand its reach while keeping existing users engaged, Yanadoo set its sights on scaling its impact even further.

Yanadoo partnered with Criteo to embrace AI-driven advertising. Using Criteo’s OneTag with Dynamic Loader, it tracked user interactions from course sign-ups to consultation requests across its site. Criteo’s AI then used this data to build detailed customer profiles, optimising ad targeting and bidding strategies in real time.

The result was a 138% jump in conversion rates and a 34% boost in average order value. By letting AI refine its advertising approach, Yanadoo turned insights into action, proving that smart targeting can make all the difference. Now, the company is doubling down on AI-powered full-funnel campaigns to drive even more growth.

Platforms are now increasingly using AI to deliver personalised ad experiences by analysing user behaviour, preferences, and demographics. For example, Google Ads uses generative AI to create customised ads based on user search intent and behaviour. Adobe Advertising Cloud employs Adobe Sensei to tailor ad experiences across channels. HubSpot, meanwhile, uses AI to personalise marketing content based on customer relationship management (CRM) data.

Criteo’s Game of Ad Strategies 

A crucial differentiator for Criteo, a France-based advertising company, has always been its strategic use of data. The company collaborates with publishers, advertisers, and agencies to extract valuable insights, which helps optimise ad placements and product recommendations. 

Initially, this focused on retargeting, where users who visited a website would later see ads prompting them to return and complete a purchase. Today, a significant part of Criteo’s business is in commerce or retail media, where ads are placed directly on retailer websites.

Recognising the potential of AI-driven insights, Criteo launched the Criteo AI Lab in 2018. With a team of 80-85 experts at the time, the lab focused on advancing AI technology to enhance the accuracy of Criteo’s advertising engine. 

In an exclusive conversation with AIM, Liva Ralaivola, VP of research at Criteo, explained, “Today, with intense competition in ad tech, the challenge is to leverage AI advancements such as generative AI and deep learning to offer even more effective solutions for publishers, brands, and advertisers.”

Every Ad has AI

Ralaivola mentioned that every campaign Criteo runs incorporates AI. For instance, a user visits the National Basketball Association’s (NBA) website to check game scores. As they browse, an ad appears, perhaps for basketball shoes, travel deals, or other relevant products. 

Behind the scenes, AI powers three critical processes in milliseconds.

To begin with, Criteo and other demand-side platforms compete in real time to purchase ad space. AI evaluates the potential return on investment based on page content and user behaviour, determining how much to bid.

Once the space is secured, an AI engine analyses the user’s profile, page context, and browsing history to determine the most relevant product to display.

Lastly, AI decides the most effective way to present the ad, whether a simple image, an interactive banner, or a visually engaging layout, to maximise user engagement.

All of this happens in under 100 milliseconds, ensuring a seamless experience for the user.
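
The three decisions can be sketched as a toy pipeline. None of this is Criteo's actual engine; the function names, scoring rules, and product catalogue are hypothetical, and the sketch only illustrates fitting bid, product, and format selection inside a latency budget:

```python
import time

def decide_bid(page_topic, user_interests):
    # Step 1: value the impression from page context and user profile.
    relevance = 1.0 if page_topic in user_interests else 0.2
    return relevance * 2.5  # expected-value-style bid, arbitrary units

def pick_product(user_interests, catalogue):
    # Step 2: choose the product whose tags best overlap the profile.
    return max(catalogue, key=lambda p: len(set(p["tags"]) & set(user_interests)))

def pick_format(product):
    # Step 3: richer creative when more imagery is available.
    return "interactive banner" if product["images"] > 1 else "static image"

def serve_ad(page_topic, user_interests, catalogue, budget_ms=100):
    start = time.perf_counter()
    bid = decide_bid(page_topic, user_interests)
    product = pick_product(user_interests, catalogue)
    fmt = pick_format(product)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < budget_ms, "blew the latency budget"
    return {"bid": bid, "product": product["name"], "format": fmt}

catalogue = [
    {"name": "basketball shoes", "tags": ["basketball", "sport"], "images": 3},
    {"name": "coffee maker", "tags": ["home"], "images": 1},
]
ad = serve_ad("basketball", ["basketball", "travel"], catalogue)
```

For the NBA-browsing user in the example above, the pipeline bids high, picks the basketball shoes, and serves an interactive banner, all within the budget.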

“At the core of Criteo’s AI capabilities is DeepKNN, a universal AI engine that represents users, products, and web pages in a unified space. This allows Criteo to seamlessly connect consumers with relevant content, making digital advertising more efficient and personalised,” Ralaivola further said.
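
The "unified space" idea behind an engine like DeepKNN can be illustrated with a toy example: users, products, and pages all live as vectors in one space, and relevance reduces to nearest-neighbour search. The vectors and entity names below are invented, not Criteo's data:

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# One shared space for every entity type (all vectors invented).
embeddings = {
    "user:sports_fan":      [0.90, 0.10, 0.00],
    "page:nba_scores":      [0.80, 0.20, 0.10],
    "product:court_shoes":  [0.85, 0.15, 0.05],
    "product:espresso_pod": [0.05, 0.10, 0.90],
}

def nearest(query_key, kind):
    # retrieve the closest entity of the requested kind to the query vector
    q = embeddings[query_key]
    candidates = {k: v for k, v in embeddings.items() if k.startswith(kind)}
    return max(candidates, key=lambda k: cosine(q, candidates[k]))
```

Because everything shares one space, the same lookup works whether the query is a user profile or a page: here both the sports fan and the NBA scores page retrieve the basketball shoes.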

What’s Next?

AI enables marketers to go beyond traditional segmentation and refine how they identify and target audiences.

For example, a common approach might be to target individuals under 40 who are likely to have children. While direct data on such predictions may not always be available, agencies and retailers often possess insights that help in audience identification. Traditionally, this process relies on human judgment and available demographic information.

However, AI extracts insights in a fundamentally different way. Instead of simply categorising people based on predefined traits, AI can identify patterns in behaviour and preferences to compute highly precise audience segments. While it may not always be easy to label these groups in traditional demographic terms, AI significantly enhances targeting accuracy.
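
A minimal sketch of what "computing segments from behaviour" can look like: cluster users on observed behaviour features rather than declared demographics. The two features and all data points below are invented, and the deterministic farthest-pair initialisation is a simplification of real segmentation systems:

```python
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_segments(points, iters=10):
    # deterministic start: the two points farthest apart seed the clusters
    c0, c1 = max(((p, q) for p in points for q in points),
                 key=lambda pq: squared_distance(*pq))
    centroids = [c0, c1]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            nearer = 0 if squared_distance(p, centroids[0]) <= squared_distance(p, centroids[1]) else 1
            clusters[nearer].append(p)
        # recentre each cluster on the mean of its members
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) for c in clusters]
    return clusters

# invented behaviour features: (sports-content visits, family-product views)
users = [(0.90, 0.10), (0.80, 0.20), (0.85, 0.15),
         (0.10, 0.90), (0.20, 0.80), (0.15, 0.85)]
segments = two_segments(users)
```

The resulting groups fall out of the behaviour data itself; as the text notes, they may not map onto a tidy demographic label, yet they are precise enough to target.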

Addressing concerns about AI replacing human decision-making, Ralaivola noted, “A key term gaining traction is augmented intelligence, where AI complements human expertise rather than replacing it. Even as AI refines audience selection, the business knowledge and strategic understanding provided by humans remain crucial.” 

It’s similar to how medical advancements build on foundational knowledge. AI processes massive volumes of data, but human expertise continues to guide decision-making.

]]>
How Private Firms Are Leading the ‘Make in India’ Shift in Defense https://analyticsindiamag.com/ai-features/how-private-firms-are-leading-the-make-in-india-shift-in-defense/ Sun, 16 Mar 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10166087 “If I don’t use AI, the competition, or worse, the enemy will use it, and they will have a leap over us.”]]>

For decades, India’s defense sector relied heavily on foreign suppliers for advanced weaponry and military technology, earning the country the distinction of being the world’s largest arms importer. However, a quiet yet powerful shift is now underway, as private Indian defense companies are gradually replacing foreign players by developing indigenous solutions. 

With domestic production hitting ₹1.27 lakh crore and exports growing 30 times in a decade, the transformation has enabled players such as Zen Technologies, a company in the field for more than three decades, to thrive. 

Ashok Atluri, chairman and managing director of Zen Technologies, highlighted this change, emphasising how policies like Indigenously Designed, Developed, and Manufactured (IDDM) and Make-2 have enabled Indian companies to step up. 

“We were the guys who proposed ‘Buy Indian-IDDM’. You should not focus only on manufacturing, you should focus on owning the IP of the product. That was the way it started, and it changed the whole ecosystem,” said Atluri, in an exclusive interaction with AIM.

Encouraging Private Defense Players

Atluri believes that one of the most significant turning points in this shift came in 2014-15, when the late Manohar Parrikar, then Indian defense minister, introduced IDDM and Make-2 policies. Before this, procurement rules favored foreign vendors, requiring multiple suppliers for any government contract.

“Till Parrikar came, they used to ask for two or three vendors. But he said, ‘If under IDDM, even if there is one company, I will buy it.’ This encouraged a lot of companies to get deep into something, put in their money, make the product, and go to the Army and say, ‘Listen, I made this,’” he said.

This shift meant that Indian companies could now secure contracts even if they were the only domestic supplier, as long as they owned the IP. This incentivised R&D and reduced reliance on foreign manufacturers.

Simulation to Combat

India has made significant strides in military simulation, anti-drone technology, and advanced surveillance systems—areas previously dominated by international firms. Zen Technologies, for instance, has pioneered simulation-based training solutions and counter-drone technology.

Atluri pointed out how simulation has become an indispensable part of modern defense. “Simulation has suddenly become a buzzword, not only because of the training advantage and skill-building but also in terms of sustainability.” He cited research that found that a ₹15 crore investment in simulation training can result in ₹385 crore in long-term savings, making a strong case for indigenous solutions.

Founded in 1993 and headquartered in Hyderabad, the publicly-listed company is led by Atluri. The company develops advanced military simulators, anti-drone technologies, and live combat training equipment. With a strong focus on indigenous innovation, Zen Technologies has secured multiple government contracts and expanded its presence globally. In FY 24, the company reported a revenue of close to ₹500 crore.

AI in Defense 

Beyond manufacturing, AI is playing an increasing role in modern warfare, and India is rapidly integrating AI into its defense strategy.

“We use AI extensively, for example, in identifying targets, video tracking, and even drone surveillance,” Atluri explained. AI-driven border security systems, autonomous drone defense, and battlefield simulations are becoming integral to national security.

He also highlighted the necessity of staying ahead in AI-driven warfare. “If I don’t use AI, the competition, or worse, the enemy will use it, and they will have a leap over us.”

However, concerns around over-reliance on AI in military decision-making remain. While some fear AI might take over critical defense functions, Atluri believes AI should be used primarily to reduce risks for soldiers. “We keep an automated gun somewhere so that soldiers don’t have to expose themselves. AI takes the enemy out of harm’s way.”

Challenges for Private Players 

While private defense players are making significant inroads, the sector still faces several challenges. Investor skepticism, high capital requirements, and slow procurement processes continue to pose hurdles.

Atluri stressed the importance of quick government procurement to sustain innovation. “If you buy a product within a year, that company will survive and reinvest in product development. But if you drag it for three years, the company might not survive. It’s a great national loss.”

Another major challenge is sourcing components. While India is making progress in self-reliance, certain critical components still need to be imported. Atluri emphasised the importance of transparency in sourcing foreign components, stating that if certain materials are unavailable domestically, companies should clearly communicate this to the armed forces and seek guidance on the best course of action.

As Indian private defense firms strengthen their domestic presence, they are also expanding internationally. 

“Africa is one region where we are really, really active. The second is the Middle East. And, now, the third is the CIS (Commonwealth of Independent States) countries because the Indian and Russian equipment is the same,” said Atluri, who also believes that this has opened up opportunities for Indian firms to compete directly with global defense giants.

“If you create a world-leading product here, you are not just selling in India, you are becoming a global champion,” he said.  

Moreover, Indian defense companies are also entering the US market, a major milestone in positioning India as a global arms supplier. “We have started getting into America. We have set up an office there, and we are trying to get into that market. The export market is something we are very excited about,” Atluri concluded. 

]]>
Bridging the Gap: How India’s L&D Leaders Are Shaping the AI-Ready Workforce https://analyticsindiamag.com/ai-features/bridging-the-gap-how-indias-ld-leaders-are-shaping-the-ai-ready-workforce/ Sat, 15 Mar 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10166073 Employees need to develop new technical skills to live in the intelligence age. ]]>

As 2025 unfolds, India’s learning and development (L&D) leaders race to bridge the gap between today’s workforce and tomorrow’s intelligence age. 

Among the leaders driving this transformation is Srikanth Vachaspati, the vice president and head of people at Siemens Technology and Services. Besides empowering employees to take charge of their careers, Vachaspati has also positioned Siemens as a leader in preparing its workforce for an AI-driven future.

In contrast with conventional L&D models that frequently enforce top-down training initiatives, Vachaspati’s method strongly focuses on employees. He has integrated microlearning, peer coaching, and hands-on experimentation into Siemens’ L&D strategy, facilitating skill acquisition without interrupting productivity.

Recognising the power of engagement in effective learning, Vachaspati employed gamification and competition along with social learning, fostering collaboration and knowledge-sharing among peers to create a sense of community and collective growth.

His team has been using Siemens’ Learning Management System (LMS) to bridge the gap between academic learning and industry application. 

Under Vachaspati’s leadership, Siemens Technology and Services has invested significantly in upskilling its workforce. “Over 95% of trained employees have collectively invested over 3 lakh learning hours, with 61% focused on technology and market trends, 21% on functions and methods, 10% on leadership skills, and 8% on personal and interpersonal development,” he said. 

To keep the workforce agile, innovative, and future-ready, he has forged partnerships with leading universities and research institutions. His innovative use of technology has set a new standard for companies to prepare their workforce for the future.

Using AI to Personalise and Scale L&D

As AI takes the lead, Vachaspati is not alone on this journey. 

From Sudheendra Naganur at Bosch Global Software Technologies, who has led AI-powered learning paths, to Joanna Orkusz, talent development and learning leader at EY Global Delivery Services, who has gamified AI upskilling for thousands, visionary leaders across industries are driving innovative L&D initiatives to shape the future of the workforce. 

AIM spoke to some of these industry stalwarts to understand how their organisations are preparing for this change. 

One such leader is Tredence CHRO Rekha Nair, who, with her hackathons, ideathons, and accelerator projects, reinforces real-world impact, bringing a unique perspective to L&D. Nair describes the L&D function as the company’s ‘growth engine’ as these initiatives are tightly linked to talent retention and client confidence. 

Bosch Global Software Technologies (BGSW) has built its Talent Hub (T-Hub), an AI-powered platform offering learning paths tailored to employees’ roles and career stages. The company states that it can deploy these programmes within three weeks—a response to the demand for speed in upskilling.

Orkusz, talent development and learning leader at EY GDS, reflects on her two decades in the field: “I’ve always tried to create workplaces that enable people to reach their full potential.” In 2025, this mission feels more urgent to her and to the industry at large. 

Meanwhile, leaders like Shalini Modi, head of learning at Genpact, Shefali Sharma Garg, chief talent officer at Publicis Sapient India, Roopa Bharvani, vice president of human resources at Fiserv, and Rohin Nadir, chief learning officer at KPMG India, stand out as trailblazers who have redefined their organisations’ approach to talent development in alignment with AI. 

Garg has built a comprehensive learning ecosystem that integrates AI into onboarding and leadership programs. 

Bharvani has prioritised upskilling at every level, from entry roles to leadership, focusing on domains like data science, cloud computing, and AI. Partnering with platforms like Udemy and Cornerstone, along with initiatives like Tech Thursdays, she and her company have fostered a culture of continuous learning and innovation.

On the other hand, Nadir decided to leverage gamification and global initiatives like 24 Hours of AI to create a buzz around AI upskilling, and Modi at Genpact has integrated AI-driven tools to create hyper-personalised learning experiences. 

In January, the World Economic Forum’s Future of Jobs Report 2025 flagged upskilling as a key area of focus for organisations worldwide. As per a recent report, 85% of professionals planned to invest in upskilling this year as confidence in job retention dropped to 62%, a nine-point decline from last year. 

“The half-life of skills is shrinking,” said Modi. “Completely new roles are emerging, and L&D needs to be tightly aligned with business strategy.” For leaders like her, upskilling is an immediate imperative. 

Genpact has developed Genome.ai, a platform that maps skills against future business needs. It helps personalise learning journeys and includes an AI-powered virtual coach (AI Guru).

Rasesh Shah, chief practice officer for edtech at Fractal, takes a similar approach at his organisation by combining AI-powered learning with adaptive coaching. Fractal’s programme, Generative AI Learning for All, includes simulations and real-time feedback.  

Meanwhile, Fiserv’s AI Academy offers tailored learning in areas like machine learning, generative AI, and MLOps. Partnering with Udemy and Cornerstone, it delivers scalable, accessible training. Initiatives like Tech Thursday forums and sandbox environments foster continuous learning and collaboration.

Moving Away from One-Size-Fits-All Training

According to Anthropic, 36% of jobs now use AI for a quarter of tasks, suggesting that generic training won’t cut it anymore. 

Publicis Sapient uses curated learning boards and a chatbot to guide employees. The company claims that in 2024, 17,000 employees completed GenAI training with 96% learning absorption. It also plans to build a 2,000-person AI engineering team through PS Slingshot, their proprietary AI platform.

EY GDS segments AI learning into three roles: enthusiasts, technologists, and account executives. Over 70,000 of its employees have been trained, with 95% AI readiness and 32,000 AI badges awarded.

Tredence runs its TALL programme, offering ‘personalised, career-stage learning’ with 82% of employees participating in structured learning. By 2025, over 1,000 will complete GenAI training and 30% of new hires will focus on GenAI and agentic AI. 

The results are tangible. Amisha Mittal, an assurance quality services manager at EY GDS, has seen the company’s AI-focused learning programmes change the way she works. “With a series of AI-centric initiatives, each has acted as a building block, contributing to my AI knowledge,” she said, commenting on how the tools have become an integral part of her daily work.

]]>
Is MCP the New HTTP for AI? https://analyticsindiamag.com/ai-features/is-mcp-the-new-http-for-ai/ Thu, 13 Mar 2025 13:57:49 +0000 https://analyticsindiamag.com/?p=10166051 Anthropic’s Model Context Protocol is a standard for connecting AI assistants to the systems where data lives.]]>

What if there was a USB-C port for AI applications—a universal connector for AI systems? Meet Anthropic’s Model Context Protocol (MCP), the newest kid on the block. This open-source protocol allows different AI models to connect with the same tools and data sources, much like standard ports enable different devices to work together. 

With the curiosity surrounding it, there is a surge in people talking about MCP, its benefits, and how it can make things convenient for developers. Could it be the torchbearer in accelerating the ease of AI integration?

What is MCP?

Simply put, MCP is a standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. It aims to help frontier models provide more relevant responses. 

There are three components of the protocol for developers: the MCP specification, local MCP server support, and an open source repository of MCP servers. It follows a client-server architecture, where a host application can connect to multiple servers. 

Santiago Valdarrama, a computer scientist, describes it as an extra layer for connecting AI agents to services like Slack, Gmail, or a database, sitting on top of the traditional integration approach. He said that even though MCP is an added layer, it reduces complexity.

Valdarrama further explains that the extra layer is an MCP server, which lets developers swap out the AI agent while the existing integrations keep working unchanged. One can use this to add improved functionality to AI coding tools like Windsurf or Cursor.

Is it the Same as APIs?

In an X thread, Valdarrama explained that MCP is not just another API lookalike. An API exposes its functionality using a set of fixed and predefined endpoints, such as products, orders, or invoices.

If you change the number of parameters for such endpoints, or add new capabilities to the API, the client will also need modifications.

However, when dealing with MCP, Valdarrama said, “Let’s say you change the number of parameters required by one of the tools in your server. Contrary to the API world, with MCP, you won’t break any clients using your server. They will adapt dynamically to the changes!”

He added, “If you add a new tool, you don’t need to modify the clients either. They will discover the tool automatically and start using it when appropriate!”
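The difference Valdarrama describes can be sketched with a toy discovery mechanism. This is plain Python, not the actual MCP wire protocol or SDK; it only illustrates how a client that asks the server what tools exist, instead of hard-coding endpoints, keeps working when tools change:

```python
# Toy illustration of dynamic tool discovery (not the real MCP protocol):
# the server advertises its tools and their parameters at runtime,
# so clients adapt instead of relying on a fixed, documented API surface.

class ToyServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, params):
        """Decorator that registers a function as a discoverable tool."""
        def register(fn):
            self._tools[name] = {"params": params, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # Clients call this at runtime instead of hard-coding endpoints.
        return {name: meta["params"] for name, meta in self._tools.items()}

    def call(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

server = ToyServer()

@server.tool("search_invoices", params=["customer", "year"])
def search_invoices(customer, year):
    return f"invoices for {customer} in {year}"

# The client discovers what is available, then calls it.
discovered = server.list_tools()
result = server.call("search_invoices", customer="Acme", year=2025)
```

Adding a second `@server.tool` function, or changing a tool's parameter list, requires no client-side change here, since the client simply re-runs discovery, whereas a REST client would need new code for each modified endpoint.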

It is as Boring and Exciting as HTTP

Matt Pocock, an AI educator, finds MCP both boring and exciting at the same time—for the same reasons that tech like REST, HTTP, SOAP, and GraphQL got traction. He added that MCP helps reduce friction and makes LLMs cooler.

Robert Mao, founder of ArcBlock, a platform to help build decentralised apps, also shared the sentiment. “HTTP is a protocol for browsers, while MCP is a protocol designed for AI,” he wrote on X.

Use Cases of MCP

There have been numerous developments by companies and individuals leveraging MCP.

Perplexity has built an MCP server for Sonar, its AI answer engine, to enable AI assistants to provide real-time web search research capabilities.

Composio, an AI startup that helps build AI apps, launched fully managed MCP servers with auth support. This will help integrate several apps like Google Sheets, Zoho, Salesforce, and more with AI coding platforms like Cursor, Windsurf, and Claude Code desktop app easily.

A developer integrated Cloudflare’s MCP worker to 10x his Cursor workflow experience. Another built an MCP server with tools for accessing all the models on Replicate, a platform to run and deploy AI models, and connected it through Claude to generate art.

Google’s Firebase, a mobile and web app development platform, integrated MCP support to its AI framework, Genkit. Cline, an autonomous coding agent, lets you build and use MCP servers. LangChain also introduced MCP adapters to allow its agents to connect to tools in the MCP ecosystem.

MCP’s influence goes beyond its popularity in usage: the concept encouraged IBM to introduce a similar protocol, the Agent Communication Protocol (ACP), a signal that the idea is solving something genuinely useful.

At the same time, there have been some mixed reactions. When a user on X asked Andrej Karpathy, founder of Eureka Labs, for his thoughts on MCP, he said, “Please make it stop.”

Learn more about the technical aspects of MCP on its documentation website.

]]>
‘We Can’t Just Upload Docs to Any LLM,’ Godrej Capital CTO on Building Saksham AI https://analyticsindiamag.com/ai-features/we-cant-just-upload-docs-to-any-llm-godrej-capital-cto-on-building-saksham-ai/ Thu, 13 Mar 2025 07:00:00 +0000 https://analyticsindiamag.com/?p=10165959 Jyothirlatha explained how the company’s Saksham AI platform is redefining business operations, customer service, and decision-making processes.]]>

Given the sensitivity of financial data, security is at the heart of Godrej Capital’s AI strategy. At the forefront of this is Jyothirlatha B, CTO of the firm, who is leading the charge to make sure AI is ingrained in every part of the firm’s functioning while ensuring data remains secure.

“We can’t just upload documents to any LLM,” Jyothirlatha told AIM. “Our platform ensures strict data security policies, controlling what data can be used and how it is processed.”

Citing concerns over open source models, even the likes of China’s DeepSeek, Jyothirlatha said that rigorous security evaluations are conducted for every model in the financial sector. “If a model passes security checks, only then do we allow it for R&D,” she said.

At Godrej Capital, AI is not just an innovation but a part of everyday workflows. Jyothirlatha explained how the company’s proprietary Saksham AI platform is redefining business operations, customer service, and decision-making processes.

Godrej Capital refers to generative AI as ‘Everyday AI’—a tool embedded in daily operations. According to Jyothirlatha, AI in small places can show great value. Whether it’s reading large documents like bureau reports or disbursement request forms, AI helps extract insights in an easy and efficient way.

AI at Godrej

At its core, Saksham AI is a flexible and scalable AI platform designed to be used across various business functions. Instead of building AI applications from scratch for different use cases, Godrej Capital employs a modular approach, where a single API can power multiple workflows. 

“Saksham AI provides reusable components so teams don’t have to reinvent the wheel,” Jyothirlatha said, adding that the platform offers different open source models like Llama or Mistral.

The team is also experimenting with models from Indian providers like Sarvam AI to test its efficiency. “Their performance has been promising, especially in regional languages,” she revealed.

Commercial solutions from AWS Bedrock are also key to the offerings. “There’s no one-size-fits-all LLM,” she explained. “For every use case, developers can experiment with different models to find the best-performing one in terms of accuracy and cost.”
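The “no one-size-fits-all LLM” idea can be sketched as a small registry that routes each use case to the best-fitting model. The model names, accuracy scores, and costs below are hypothetical placeholders, not Godrej Capital’s actual configuration:

```python
# Hypothetical model registry: each entry carries an evaluated accuracy
# and a running cost, and a router picks the cheapest model that clears
# the accuracy bar a given use case demands.
MODELS = {
    "llama-3-70b": {"accuracy": 0.88, "cost_per_1k_tokens": 0.0008},
    "mistral-large": {"accuracy": 0.90, "cost_per_1k_tokens": 0.0020},
    "small-regional": {"accuracy": 0.80, "cost_per_1k_tokens": 0.0002},
}

def pick_model(min_accuracy: float) -> str:
    """Return the cheapest registered model meeting the accuracy bar."""
    candidates = [
        (meta["cost_per_1k_tokens"], name)
        for name, meta in MODELS.items()
        if meta["accuracy"] >= min_accuracy
    ]
    if not candidates:
        raise ValueError("no registered model meets the accuracy requirement")
    return min(candidates)[1]

# A tolerant use case (e.g. summarisation) can take a cheap model;
# a strict one (e.g. fraud checks) cannot.
assert pick_model(min_accuracy=0.75) == "small-regional"
assert pick_model(min_accuracy=0.89) == "mistral-large"
```

Exposing such a router behind a single API is one way a modular platform can let teams swap models per use case without touching the calling workflow.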

Recently, Snowflake announced that Godrej Capital uses its AI Data Cloud as a unified platform to enhance financing solutions. This integration has streamlined the customer loan journey, enabling predictive insights and personalised services for thousands across India. 

Godrej Capital’s platform engineering team plays a crucial role in evaluating AI models. “They continuously test new models and shortlist the best ones for different use cases,” Jyothirlatha said. 

This ensures that developers working on business applications can simply plug into the best-performing AI model without worrying about underlying infrastructure, security, or cost optimisation.

One of the key capabilities of Saksham AI is document summarisation, allowing business teams to process large amounts of data quickly. This feature extends beyond financial documents, assisting credit managers and customer service representatives to analyse customer interactions and improve service quality.

“When call centre agents interact with customers, it’s impractical to manually review all calls for quality checks,” Jyothirlatha explained. Saksham AI automates call analysis, rates conversations, and identifies areas for training. 

The same capability applies to credit managers who make verification calls, to ensure consistency and compliance in customer interactions.

The platform also powers Customer 360, an AI-driven dashboard that consolidates relevant insights for agents handling customer queries. “Instead of bombarding an agent with excessive data points, we provide summarised insights that improve decision-making,” she added. 

AI adoption is not limited to customer service and risk management; it extends to the developer ecosystem as well. “We actively use GitHub Copilot for code reviews and security checks,” she said. While it’s an excellent tool, Jyothirlatha acknowledges the cost considerations in scaling it across all developers.

India’s AI Future

The debate over whether India should develop its own foundational AI models has also reached Jyothirlatha. While some experts argue for investing in data marts instead, Jyothirlatha believes both are essential. “No LLM can be successful without quality training data,” she asserted. 

She sees an opportunity for India-specific LLMs trained on financial data from government initiatives like IndiaAI Mission and Unified Lending Interface (ULI).

One of the most significant impacts of AI at Godrej Capital is in fraud detection. “We analyse bank statements and multiple documents to identify anomalies,” said Jyothirlatha. While she couldn’t disclose exact figures, she hinted at AI significantly improving fraud detection rates.

Saksham AI is more than just an AI platform—it’s a developer playground where teams can experiment with AI models securely and efficiently. “It reduces technical debt by providing ready-made software development kits (SDKs), security frameworks, and API gateways,” Jyothirlatha explained.

Currently, Saksham AI is exclusive to Godrej Capital. However, there are plans to expand it across the entire Godrej Group. “It’s not just for our organisation, it’s a scalable platform that can serve multiple Godrej businesses while maintaining security and compliance,” Jyothirlatha revealed.

Whether through hiring AI engineers, upskilling employees, or adopting new AI models, the company is embracing AI at every level. “Smart people will always be needed to train and refine AI agents,” she concluded. “Humans will always be in the loop.”

]]>
Coding Interviews are Becoming a Joke https://analyticsindiamag.com/ai-features/coding-interviews-are-becoming-a-joke/ Wed, 12 Mar 2025 12:30:00 +0000 https://analyticsindiamag.com/?p=10165932 If companies are so stringent about proctoring, how do some candidates manage to clear interviews using AI?]]>

A Reddit user shared that after spending three months in the free pool with no projects, he began job hunting, only to find that every Java backend role required over four years of experience. Meanwhile, at his current company, Wipro, he was required to take unproctored competency tests. Despite the challenge, he was able to complete these tests successfully using online resources.

However, during his final proctored test, he encountered an unfamiliar integrated development environment (IDE) that lacked IntelliSense-style code completion. To work around this, he used his local IDE before submitting the test. Days later, HR accused him of malpractice. Despite his detailed explanation, they refused to reconsider their decision and forced him to resign immediately, without a second chance or even a proper notice period.

This incident raises an important question: If companies are so stringent about proctoring, how do some candidates manage to clear interviews using AI? 

Recently, a LinkedIn user highlighted how AI-powered tools are helping candidates breeze through coding interviews. “Are coding interviews becoming a joke? Just came across an AI tool that…can auto-hide when screen sharing and stay invisible, generate natural reasoning for ‘your’ solution, and simulate real eye movements to bypass monitoring. And guess what? It’s open-source,” he said.

His concern was clear: If AI can ‘clear’ coding rounds for candidates, are companies truly assessing real skills? Even if one AI tool is blocked, another will surface. So what’s next? Should companies rethink hiring strategies altogether, move to live pair-programming, go back to whiteboard interviews or let AI interview AI?

In response, a LinkedIn user with three decades of coding and over 25 years of teaching experience said that LeetCode, CodePen or HackerRank are simply not good enough. “If you’re hiring and you cannot give me two hours of your time for a coding session, or you don’t trust me enough to hand me an email challenge…Why should I commit to you?”

This sentiment resonates with many in the developer community, who feel that technical interviews have become more of a formality than a true test of skills.

Hear it From Experts

In a conversation with AIM, Pratham Patel, a member of Rocky Enterprise Software Foundation, shared his perspective. “As an interviewee, I can say…that the interview process has become more of a formality than an actual test. It is unfortunate to see that interviewers are more interested in whether the candidate can provide code with flawed reasoning rather than understanding if they truly grasp programming and the art behind it.”

Meanwhile, Krishna Vij, vice president of IT staffing at TeamLease Digital, pointed out that AI is pushing companies to rethink their hiring strategies.

“How we assess technical talent is evolving and AI-driven tools are accelerating that shift. If a candidate uses AI to clear a coding test, we must ask ourselves: Are we truly measuring their skills or just their ability to leverage technology? This is why companies must rethink their hiring strategies, as standard coding tests alone are no longer sufficient in the current situation.”

Vij added that the company is seeing a stronger push towards live coding interviews, project-based assessments, and in-depth problem-solving discussions. The focus now is on critical thinking, adaptability, and real-world application rather than just syntax and speed. Hiring processes must keep evolving because as AI gets smarter, so must the way companies evaluate talent.

Similarly, Rahul Veerwal, founder and CEO of GetWork, reinforced this perspective and stressed the need for multi-layered assessments.

“At GetWork.ai, we’ve seen firsthand how AI can enhance hiring, but we also recognise its potential for misuse. While AI-assisted cheating poses a challenge, the solution isn’t to abandon coding rounds but to evolve how we assess talent,” he explained. 

According to Veerwal, this is why GetWork believes the future of hiring lies in AI-proctored, dynamic assessments that test a candidate’s real problem-solving skills, not just their ability to recall syntax. Moreover, structured follow-ups like automated technical interviews can reveal a candidate’s practical knowledge beyond AI-aided solutions.

“We also see immense potential for AI in levelling the playing field for job seekers, especially from tier-2 and tier-3 cities. Our GenAI copilot, Horizon AI, helps candidates prepare for interviews, upskill, and build confidence, without crossing ethical boundaries,” he said.

He stressed that companies have to move beyond one-dimensional coding tests: implement multi-layered assessments, use proctored environments and, most importantly, assess problem-solving approaches over perfect code output.

What’s Next?

Only about 10% of Indian engineering graduates possess adequate coding skills, according to a 2019 report by Aspiring Minds. More recently, a study by TeamLease found that merely 5.5% of Indian engineers possess basic programming abilities.

Source: Statista

Furthermore, the Equinix 2023 Global Tech Trends Survey revealed that 86% of Indian businesses are actively reskilling their IT workers to address the industry’s needs. 

These figures point to a clear gap in coding skills among Indian engineers, even as demand for professionals skilled in emerging technologies remains strong. Engineers must upskill to stay in the race. 

]]>
Hyperpersonalisation and Stickiness—The Buzzwords AI Startups Can’t Ignore https://analyticsindiamag.com/ai-features/hyperpersonalisation-and-stickiness-the-buzzwords-ai-startups-cant-ignore/ Wed, 12 Mar 2025 11:00:00 +0000 https://analyticsindiamag.com/?p=10165928 Crystal Huang said never before has she seen so many companies pick up crazy traction at launch, get to tens of millions of annual revenue, and have people lose interest in a short while. ]]>

AI startups ride the AI wave, gain traction, secure investment, and launch operations. If only it were that simple—some fade into irrelevance before they even get started!

What could be the reason behind these failures? What are the potential solutions? What are investors looking for in an AI startup in 2025? The questions are aplenty. Crystal Huang, general partner at Google Ventures, shed light on these in Google’s ‘Future of AI: Perspectives for Startups 2025’ report. 

Boom and Bust Cycle is Going to Continue

Huang said never before has she seen so many companies pick up crazy traction at launch, get to tens of millions in annual revenue, and then have people lose interest in a short while. She explained that since the tooling for building an application is so widely accessible, this kind of cycle will continue.

“It’s difficult and costly to build foundation models, so not many teams will be able to do it, but at the application layer, I think there’s going to be tons of disruption and rebirth,” she stated. 

“It’s exciting that the 2025 landscape will look nothing like last year, and while generative AI is obviously an exciting territory for investors, the standard valuation framework still applies,” Huang added.

Stickiness as a Metric

Huang noted that she will be looking for products that are stickier. “If your product is easy to implement, it’s just as easy to uninstall. Products need to be stickier to create lasting value, which means being both indispensable and deeply integrated into the user’s workflow,” she explained.

Even though she sees the urgency in AI, she believes it often takes a joint effort between platforms and enterprises to build automations and workflows, and that hard-won access to enterprise data is what significantly boosts performance.

Huang also highlighted that the fastest growth comes from individuals willing to experiment with something new for $15 a month, rather than from enterprises investing $20 million a year in legacy systems.

There have been talks about companies like Duolingo, which took the gamification route to add stickiness to their product. Startups can learn from such examples to add value to their products.

Hyperpersonalisation is the Future

Huang highlights a crucial shift in the AI landscape: the journey toward hyperpersonalisation. 

While the promise of AI-driven, tailored experiences—from marketing to healthcare—has long been touted, its widespread adoption has been hindered by excessive costs. However, this is rapidly changing. 

“Training expenses are dropping as smaller and more domain-specific models emerge, and inference costs are plummeting across the board,” Huang pointed out. This cost reduction is enabling personalised AI applications, which were once economically unfeasible, further helping AI companies cultivate user loyalty.

Huang emphasises the increasing sophistication of enterprise clients in AI. CIOs and CTOs are no longer swayed by mere novelty; they demand demonstrable ROI and a clear competitive edge. The commoditisation of certain AI capabilities means that companies must continually innovate and adapt. 

This dynamic environment necessitates AI startups to move beyond simply securing funding and focus on building robust, revenue-generating products with defensible moats.

Matthieu Rouif, co-founder and CEO of Photoroom, added in the report, “AI will understand and adapt to human emotion. It will get better at understanding what triggers emotions in humans, allowing for stories and content to be personalised and adapted to individual emotional responses.”

David Friedberg, CEO of Ohalo Genetics, states that AI will also change the media landscape with personalised movies and video games with content generated on the fly, altering the value proposition.

Taking into account the views of some influential industry leaders, AI startups can benefit from having a clear understanding of what to focus on next to really connect with their audience and make their product shine. 

]]>
But Why Did Microsoft Port TypeScript to Go Instead of Rust? https://analyticsindiamag.com/ai-features/but-why-did-microsoft-port-typescript-to-go-instead-of-rust/ Wed, 12 Mar 2025 07:55:48 +0000 https://analyticsindiamag.com/?p=10165914 “If you're coming from JavaScript, you're going to find a transition to Go a lot simpler than the transition to Rust.”]]>

Microsoft is all set to port the TypeScript compiler and toolset to Go, achieving 10x faster compile speed across different codebases. Though developers largely praised the announcement, some expressed disappointment because Microsoft chose Go instead of Rust to port the TypeScript compiler. 

A user on X summed up the overall sentiment perfectly. “More shocking than TypeScript getting 10x speedup is they didn’t write it in Rust,” he said. 

“In a blink of an eye, Java vs C# debates have turned into Rust vs Go debates. Special thanks to TypeScript for making this happen,” another said.

“Microsoft is rewritting TypeScript compiler in… Go,” read a post by u/rodrigocfd on the r/rustjerk subreddit.

As the displeasure poured in, Ryan Cavanaugh, a lead developer of TypeScript, clarified the stance, admitting that they had anticipated a debate over this. He said that while Rust was considered an option, the ‘key constraint’ was portability, which ensured that the new codebase was algorithmically similar to the current one. 

He also revealed that multiple ways were explored to represent the code so that rewriting it in Rust would be manageable. But they ran into ‘unacceptable’ trade-offs in performance and ergonomics. Some approaches would have required implementing their own garbage collector (GC), adding further complexity. 

Go, by contrast, recycles memory automatically through built-in garbage collection. “Some of them came close, but often required dropping into lots of unsafe code, and there just didn’t seem to be many combinations of primitives in Rust that allow for an ergonomic port of JavaScript code,” said Cavanaugh. 

He explained that the team ended up with two options. One was to do a rewrite from scratch using Rust, which he said could take ‘years’ and still yield an incompatible version of TypeScript ‘no one could use’. Second, build a usable port in Go within a year, which is ‘extremely’ compatible in terms of semantics, while offering competitive performance. 

Cavanaugh also indicated that Go, like Rust, offers excellent code generation, strong data representation capabilities, and first-rate concurrency primitives. 

Given the way the team had written the code so far, he explained, Go turned out to be a surprisingly good fit for the task.

“We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code,” he added in a post on GitHub.

‘Transition to Go is a Lot Simpler than Transition to Rust’

In an interview, Anders Hejlsberg, the lead architect of TypeScript, largely reiterated Cavanaugh’s remarks. 

He said the only way the project would be meaningful was to port the existing codebase as is. The original codebase was designed with certain assumptions – and the most important one was the presence of automatic garbage collection. 

“I think [that] pretty much limited our choices, and started to heavily rule out Rust,” said Hejlsberg, indicating the lack of automatic memory management. 

Another challenge with Rust, as pointed out by Hejlsberg, is its strict limitations around cyclic data structures, which the TypeScript compiler relies on heavily. The system includes abstract syntax trees (ASTs) with parent and child references, symbols and declarations that reference each other, and recursive types that naturally form cycles. 

It is important to note that TypeScript is built on top of JavaScript. “If you’re coming from JavaScript, you’re going to find a transition to Go a lot simpler than the transition to Rust,” said Hejlsberg. 

He also said that Go is gentle on the system and isn’t a “super complicated” language with an awful lot of ceremony. “Which I would say Rust comes a lot closer to,” he added. 

]]>
Why Can’t AI Features Be Turned Off in Some Apps? https://analyticsindiamag.com/ai-features/why-cant-ai-features-be-turned-off-in-some-apps/ Wed, 12 Mar 2025 05:37:39 +0000 https://analyticsindiamag.com/?p=10165886 Not everybody can be pleased. The goal is to please as many people as possible.]]>

Every day, it feels like founders come up with a new adjective to describe the potential of artificial intelligence. And they’re not wrong—AI has truly demonstrated its transformative capabilities over the past few years. 

As a result, everyone wants to add an element of AI to their applications, and it’s now making its way into products used across generations — whether an email client, a messaging app, or the suite of AI features that shows up after a smartphone update. 

But is it fair not to let users disable these additions or force them to go through a cumbersome process of hunting for buttons buried deep within menus to turn them off? 

Such has been the case with certain applications, and several users have expressed their frustrations on social media. While some have been unable to disable AI features, others are struggling to find the settings to do so, on products offered by companies like Meta, Microsoft, Google, Apple, xAI, and so on. 

A controversial example is Snapchat, which introduced the ‘My AI’ chatbot. Despite backlash, the company only allowed users with a Plus subscription to disable it, sparking further frustration among non-subscribers.

While most of these products let users opt out of sharing their data for training purposes, some users simply don’t want these features to clutter or interfere with their experience. 

To understand the rationale behind such strategies, AIM spoke to Karan Peri, an independent product advisor with over a decade of experience in product management at companies like Microsoft, Coinbase and Amazon, among others. 

‘Trying to Please Everyone Will Only Lead to Failure’

When the company decides to introduce a feature, it is almost always based on an A/B test, which involves testing two product versions among the users. 

The version that rates better in aggregate makes it into the final product. In this process, individual users’ preferences aren’t considered; only the majority’s preference counts. 

Peri stated that if a feature does not provide value for a long time, good product teams will either iterate or turn it off, especially if the backlash is detrimental to the engagement of the product.

However, when companies do consider the feedback of a minority and end up adding more knobs and controls to a feature, the resulting changes to the codebase can increase maintenance costs. 

“If you go on a path believing that you want to please everybody, you will fail. That means you’re dropping your quality to the least common multiple. You keep dropping till it pleases everybody,” Peri added. 

For instance, a 75-year-old user of a popular messaging app is frustrated by the AI features she can’t disable. She falls into the third group of people who dislike the feature, alongside power users in the first group who love it, and the second group consisting of those who don’t care about the feature. 

“But the first two buckets are way larger than the third bucket, so the company doesn’t care,” Peri explained. 

Nevertheless, if the messaging app occupies an almost monopolistic position in the industry, its competitive position is advantageous. “If not for that app, where would the 75-year-old woman go?” he questioned. “She may not go to another app because her grandchildren and other family members aren’t there. Whether she likes it or not, the product has her hooked.” 

Hence, certain companies will continue to retain these features. 

Moreover, certain features may not appeal to people initially, given the cognitively heavy task of understanding how they work. However, users eventually end up embracing them. Companies often aim to weave these features into the fabric of the product — Netflix, for example, where the AI-enabled recommendation engine has become a crucial part of the user experience. 

For example, several users complain of the inability to disable Meta’s AI features on WhatsApp, Facebook, and Instagram. However, a report from The Information revealed that Meta AI daily usage is heaviest on WhatsApp and Facebook. Meta also revealed that Meta AI had about 185 million weekly users across its products, as of last year. 

Furthermore, despite Snapchat’s questionable choice of not letting users disable ‘My AI’ chatbot for free, the app has seen nothing but consistent growth in the number of active users over the years. 

That said, strategies vary across companies. For example, Apple Intelligence provides an option to turn the features off.

“If you buy a new iPhone, which is super expensive, you must ensure that the phone is liked. If not, then the entire line will get discredited,” said Peri, indicating that the calculus changes when hardware and money are involved. 

However, when Apple came under fire for its AI-enabled notifications feature hallucinating and reporting fake news, the company eventually halted the feature.

Are Companies Making a Good Case for Value Addition? 

“People are thinking about turning an AI feature off because they do not know how to get value from it,” Peri pointed out. 

“If it was valuable and people understood what it was doing, maybe lesser people would have asked, ‘How do I turn it off?’ You want to turn it off because it is useless,” he added, indicating that people on forums wouldn’t want to ask how to turn it off if it added good value. 

Several factors contribute to this scenario. The company likely did not put enough thought into the launch, choosing to release the feature and observe user behaviour without explaining how it works. Moreover, the instructions may not have been clear enough for users to understand the feature’s purpose and functionality. 

However, Peri said that good product teams often implement a feature in a way where education isn’t needed. 

In conclusion, it boils down to a single fact: not everybody can be pleased. The goal is to please as many people as possible.

]]>
How Does TurboML Plan to Build a Foundational Model Under $12 Million? https://analyticsindiamag.com/ai-features/how-does-turboml-plan-to-build-a-foundational-model-under-12-million/ Tue, 11 Mar 2025 14:04:18 +0000 https://analyticsindiamag.com/?p=10165853 The key to achieving this also lies in assembling a world-class AI team, says CEO Siddharth Bhatia.]]>

Machine learning platform TurboML is among the frontrunners awaiting a decision from the IndiaAI Mission on its proposal to build foundational AI models trained on Indian datasets. 

As a first-time founder, TurboML’s Siddharth Bhatia spoke to AIM about his goal of building a home-grown foundational model for less than $12 million. 

China’s DeepSeek claims to have built its model for less than $6 million. Although Bhatia did not specifically address this figure, he pointed out that being part of the IndiaAI mission would grant access to subsidised compute rates through government vendors and contracts.

But can a state-of-the-art (SOTA) foundational model really be built in six months? 

Bhatia believes a timeline of 8-10 months is doable with reinforcement learning, synthetic data generation, and global collaboration.

The Indian problem is unique. Bhatia said India faces challenges primarily due to the lack of internet-scale data compared to countries like the US and China. Besides, linguistic diversity adds to the difficulty. 

Bhatia said that TurboML will take a phased approach, starting with smaller datasets and model parameters and scaling 10x at each stage. The roadmap involves progressing from less than 50 billion tokens initially to around 10 trillion tokens.

He also highlighted working in parallel to pre-train smaller models and generate more synthetic data for training larger models to make the timeline achievable.

IT minister Ashwini Vaishnaw expects India’s LLM to be ready in ten months. The government has set aside ₹2,000 crore for the IndiaAI mission and has received 67 proposals so far, including 22 for LLMs. The Ministry of Electronics and Information Technology (MeitY) will keep accepting proposals until the 15th of each month for the next six months, or until it has a sufficient number. 

AIM also spoke with IIT Madras professor Balaraman Ravindran, mentor to Perplexity CEO Aravind Srinivas, to assess if the timeline was plausible.

“I think six months is too aggressive a timeline for us to really build super capable models. What we are probably going to get are right or decent models; we are not going to shake the world,” he told AIM. Interestingly, IIT Madras has also submitted a proposal under the IndiaAI mission in collaboration with a startup.

Other players such as Sarvam AI, Krutrim, CoRover.ai, Zoho, LossFunk, Kissan AI, Soket AI Labs, and IIIT Hyderabad are also in the race to develop India’s next GenAI models under the mission. 

Building a ‘Global Team’

The key to achieving this also lies in assembling a world-class team. Through a post on social media, Bhatia called for remote AI researchers and engineers.

Coming to his hiring philosophy, Bhatia noted that the team is not limited to just Indian talent. They are looking for international experts from leading AI companies.

He said that the core team is remote, with a presence in India and the San Francisco Bay Area.

Origins of TurboML

Bhatia did his PhD in real-time machine learning at the National University of Singapore (NUS). This is also where he met and collaborated with his co-founder, Arjit Jain. “My co-founder was at IIT Bombay and had come for a research internship at NUS.” The duo started working on continual learning.

On his startup breakthrough, he said, “One of our projects got featured on the front page of Y Combinator’s Hacker News… people started using our research for different use-cases, and developers from Amazon, MasterCard, and Instacart implemented their own versions.”

Subsequently, the news led to a lot of inbound interest from major companies, which asked for a product built around the research, said Bhatia.

This also led to their transition from academia to entrepreneurship and consulting. Initially, they handled the demand through consulting engagements, working directly with companies on specific implementations. 

“We started working with a few companies just on a contractual consulting basis.” Bhatia noted that such engagements helped them understand real-world use cases and needs, beyond academic research.

This led them to start TurboML. 

The company focuses on continually incorporating new data and feedback to update ML and LLM models.

]]>
Chennai-Based YourTribe is the Matchmaker Startups Didn’t Know They Needed https://analyticsindiamag.com/ai-features/chennai-based-yourtribe-is-the-matchmaker-startups-didnt-know-they-needed/ Tue, 11 Mar 2025 11:46:41 +0000 https://analyticsindiamag.com/?p=10165838 YourTribe is aiming to help startups save investments on recruitment agencies]]>

Recruitment agencies play a crucial role in helping companies find the right talent. However, for many startups, especially in the early stages, hiring an external recruitment agency may not always be a feasible option. Now, a Chennai-based startup, YourTribe, is stepping in to help other startups identify and secure suitable candidates for their job roles.

Deepak Subramanian, founder of YourTribe, may not have a background in tech, but he brings 21 years of experience in the recruitment industry. AIM spoke to him to understand how he is trying to make a difference in the talent marketplace, and just how impactful YourTribe has been for startups it collaborates with.

What is YourTribe Doing Exactly?

According to Subramanian, YourTribe is a talent marketplace built specifically for startups and emerging brands. He highlighted that these two categories of employers need employer branding, as they typically have no brand recall. In contrast, mature startups that have raised funds become popular; their names circulate in the market, and jobseekers recognise them.

Subramanian pointed out that startups between the pre-seed and Series B funding stages often struggle with limited public relations (PR), making it difficult for job seekers to discover them.

This is where YourTribe comes into the picture, combining employer branding and AI to help employers hire talent. The company says it uses AI-driven CV matching combined with human intelligence; the AI aims to eliminate human bias while still surfacing the best-matching resumes for employers.

Subramanian shared some success stories from startups using the platform. Notably, Prudent AI, a Chennai-based bootstrapped startup, managed to scale from 10 members to an 80-member team. They were the first to adopt YourTribe’s recruitment-as-a-service model, with a subscription-based pricing system. Subramanian highlighted that the startup saved at least 55% of the cost compared to using a recruitment agency.

YourTribe has also helped certain non-startups seeking expansion in India. Subramanian mentioned supporting companies like Zitro Games, a Spanish slot machine manufacturer.

He revealed that the top job roles in demand include full stack development, product management and go-to-market (GTM) roles.

The revenue model for YourTribe is subscription-based. Subramanian said startups can buy job roles or resumes. If they buy a job role, it is an end-to-end experience in which YourTribe helps them post their job and handle the job seekers until the interview is completed and the employee is onboarded. If startups buy resumes, YourTribe gives them matching and qualified leads.

Subramanian shared that the lifetime revenue from 2022 until present is about ₹9.7 crore. Moreover, there are plans to close at around ₹4 crore this year.

YourTribe’s Debut on Startup Singam & Funding

YourTribe recently raised its seed round on a new show called Startup Singam, a Tamil equivalent of Shark Tank streaming on JioHotstar. The company raised ₹4 crore from investors on the show.

Currently, the company is aiming to use the funds to hire senior staff and has plans to go back to the market for fund-raising after executing the planned tasks.

Subramanian disclosed that they are looking to raise ₹35-₹40 crore in series funding at a valuation of ₹150 crore.

What’s The Future of Employer Branding in the Age of GenAI?

“I think employer branding is going to be the key for any employer to hire in the coming years, especially with the age of GenAI,” Subramanian said.

He added that many people will ask ChatGPT about employers, where to go, and how to apply. Without strong employer branding, companies may lose out on the opportunity to hire the right talent.

]]>
Cognizant, Are You Okay? https://analyticsindiamag.com/ai-features/cognizant-are-you-okay/ Tue, 11 Mar 2025 07:44:36 +0000 https://analyticsindiamag.com/?p=10165792 The company has been reducing its global office footprint, shedding 11 million square feet of space worldwide.]]>

Cognizant appears to be in slight trouble. In two of the last three years, the company’s revenue growth fell below the 5% mark, and in one of those years, it reported negative growth. The outsourcing firm’s financials show visible signs of distress across its operations, including the sell-off of workspace and a reduction in headcount.

According to a recent report, Bagmane Group is in advanced talks to acquire the Chennai office property of Cognizant for approximately ₹800 crore. 

The campus, which has served as Cognizant’s India headquarters for over two decades, spans 13.39 acres and includes 5.6 lakh square feet of office space along Chennai’s bustling IT corridor. According to estimates, the Chennai office has a capacity of around 55,000 employees.

Following the acquisition, the site has the potential for expansion, with the possibility of developing up to 3 million square feet of additional space. 

Cognizant says that this transaction aligns with its ongoing cost-optimisation strategy, which aims to cut $400 million in expenses over two years as part of its 2023 restructuring plan. Cognizant has strategically shifted towards an asset-light model, focusing on shedding non-core real estate holdings.

The company has been reducing its global office footprint, shedding 11 million square feet of space worldwide. In 2023 alone, Cognizant reduced its Chennai office space by 1.15 million square feet to enhance operational efficiency and also sold its office spaces in Hyderabad of 10 acres, citing similar reasons.

The company had plans to consolidate its Chennai operations into three of its own buildings located at Madras Export Processing Zone (MEPZ), Sholinganallur, and Siruseri, with headquarters relocating to the MEPZ campus near Tambaram by December 2024.

The Thoraipakkam property, historically significant as the site where Cognizant remotely rang the Nasdaq opening bell, has faced challenges including flooding issues due to its proximity to a water body, leading to increased maintenance costs.

This practical consideration further justifies the strategic nature of the sale rather than indicating financial distress. 

A mail sent by AIM to Cognizant seeking clarification on the matter went unanswered.

In August 2024, Cognizant expanded its presence in Indore with its first office there, a move set to create over 1,500 jobs, with potential growth to 20,000 in the future. On the surface, then, all seems fine for Cognizant at the moment.

The Employee Problem Remains Real

Despite the positive outlook for selling its office spaces, Cognizant has experienced notable changes in its workforce structure, with headcount declining by 10,700 employees year-over-year to 3,36,800 as of December 31, 2024. 

AI adoption hasn’t been all good news for Indian IT. For the most part, firms integrate generative AI products into their client offerings. At the same time, there have been several reports about how these AI tools are slowly going to reduce the demand for services themselves, making them rethink their business models.

Cognizant has been quite vocal about implementing generative AI into its workflow and also for its clients. This might explain the decline in headcount.

Another interesting aspect is that Cognizant employees in India will have to wait until August 2025 for their next salary hike, marking the second consecutive year of delays. Even last year, the company postponed increments, eventually providing only a 5% raise in August.

Meanwhile, eligible employees began receiving their bonuses from March 10, according to an internal memo.

Reports suggest the delay is part of an effort to manage high attrition rates. Voluntary attrition at Cognizant climbed to approximately 16% in the last fiscal year, up from 13.8% for the period ending December 31, 2023.

This reduction in headcount included a sequential decrease of 3,300 employees in the final quarter of 2024. Simultaneously, the company’s attrition rate has increased to 15.9% on a trailing twelve-month basis, representing a rise of 2.1 percentage points.

Leadership Remains Confident

Despite these reductions, company leadership has expressed confidence about future hiring. CFO Jatin Dalal revealed, “As we look forward, we feel that we will add headcount as we need to grow and as we grow during the course of 2025.”

Notably, CEO Ravi Kumar S highlighted a positive trend in talent attraction, highlighting that 13,000 former employees had returned to Cognizant, with an additional 10,000 expressing interest in rejoining. 

This suggests the company is adjusting its workforce strategically rather than implementing crisis-driven cuts. The company’s utilisation rate declined by 2 percentage points to 82%, though management emphasised that utilisation improvements remained strong throughout 2024.

Speaking of utilisation, a former Cognizant employee shared their experience of spending over two years at the company with little to no real work. After onboarding and four months of Java Spring Boot training, they waited two months for a project assignment. 

When placed, they, along with two other freshers, were told to wait another three months until a vacancy opened. Even after receiving knowledge transfers (KTs), they were overlooked for the role and continued waiting.

Placed on the bench, they struggled to find another project amid intense competition. HR advised them to train in Apache Camel, which led to another two months of unproductive learning. Forced to work from the office five days a week without their personal laptop, they were unable to prepare for external opportunities. 

Despite a year of “experience”, actual work remained elusive. A senior eventually gave them minor tasks, but within two months, they were relieved from the project due to cost-cutting during a period of severe layoffs.

Five months later, they were terminated with three months of severance pay.

Financials Tell a Mixed Story

Cognizant reported full-year 2024 revenue of $19.736 billion, representing a 1.98% increase compared to the previous year. 

This growth reverses the slight downward trend of 2023, when the company reported revenue of $19.353 billion, a marginal 0.39% decline from 2022 — a year that had itself seen 4.9% revenue growth. 

For FY21, the company reported total revenue of $18.5 billion, up 11.1% year-on-year on the back of momentum in its digital business, after a relatively flat 2020 that saw a slight decline of 0.78% compared to 2019.

This has been the case for most Indian IT firms, which have reported single-digit growth over the last few quarters.

Despite this, Cognizant has provided a positive outlook for 2025, forecasting revenue growth of 3.5% to 6.0% in constant currency terms, with revenue expected to reach between $20.3 billion and $20.8 billion. 

For the first quarter of 2025 specifically, the company expects revenue between $5.0 billion and $5.1 billion, representing growth of 5.6% to 7.1% year-over-year.

The projected numbers, however, remain in single digits.

Based on the available evidence, Cognizant appears to be a company in transition rather than in trouble, just like any other Indian IT firm. While workforce reductions and real estate divestitures might initially raise concerns, these actions align with a deliberate strategy to optimise operations, reduce costs, and position the company for future growth.

The upcoming Investor Day on March 26 will likely provide further clarity on the company’s strategic direction and growth plans. 

]]>
From the US Navy and Intel to Lenovo, Doug Fisher’s Mission to Secure AI https://analyticsindiamag.com/ai-features/from-the-us-navy-and-intel-to-lenovo-doug-fishers-mission-to-secure-ai/ Tue, 11 Mar 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10165762 “Security has to start with the people, because over 80% of all vulnerabilities usually come through your employees.”]]>

With cybersecurity threats evolving and finding more sophisticated ways to attack, AI is emerging as a critical tool in enhancing cyber defense mechanisms. However, the very AI designed to protect is also fueling next-generation cyberattacks, creating a complex paradox where AI serves both as a weapon and a shield in the battle for cybersecurity.

While threat actors are using AI to automate phishing attacks, generate deepfakes, and bypass traditional security measures, companies like Lenovo are using AI to predict, detect, and neutralise these threats before they cause harm.

“AI is enhancing cyber threats, making phishing and hacking more sophisticated. We have to assume threats will evolve constantly and prepare accordingly,” said Doug Fisher, senior vice president and chief security and AI officer at Lenovo, in an exclusive interaction with AIM on the sidelines of the recent Lenovo Tech World 2025 India Edition. 

Sophisticated AI Threats 

AI-powered phishing creates messages almost indistinguishable from human communication. “Whether it starts with better social media attacks, scanning for vulnerabilities, or even evolving into fully autonomous cyberattacks, AI will certainly help with a lot of that,” Fisher warned. 

The rise of AI-driven threats forces organisations to rethink their security strategies, as traditional rule-based security measures are no longer sufficient. To counter AI-powered attacks, companies are adopting AI-driven cybersecurity solutions that use machine learning, behavioral analysis, and predictive threat detection. 

Notably, AI enables real-time monitoring, allowing organisations to respond to threats before they escalate. Lenovo, for instance, is actively integrating AI into its cybersecurity framework. The company partners with SentinelOne and other AI-based security firms to develop proactive threat detection systems. AI tools continuously scan Lenovo’s infrastructure, identifying anomalies that indicate potential breaches. 

“AI doesn’t just detect threats, it predicts them. We are using AI to neutralise threats before they reach our systems,” Fisher further explained.

AI Governance and Compliance

While AI is reshaping cybersecurity, organisations are also navigating the complexities of AI governance and compliance. Global regulations around AI security are still evolving, making it difficult for companies to standardise their approaches. Lenovo, operating in 180 markets, faces the challenge of aligning its AI security measures with diverse regulatory requirements.

“Regulations are often vague, making compliance interpretation difficult. We take a risk-based approach and help governments refine security policies,” Fisher said. To address this, Lenovo collaborates with policymakers and regulatory bodies, ensuring its AI-driven cybersecurity measures align with emerging global standards.

Besides, Lenovo also takes a human-centric approach to cybersecurity that starts with their employees. Fisher believes every employee plays a role in safeguarding the company’s digital infrastructure. 

“Every Lenovo employee, from the CEO to new hires, must complete security training. If they fail, they lose network access,” Fisher revealed. Employees are trained to recognise phishing attempts, spot deepfakes, and report suspicious activities. The goal is to make security second nature for employees rather than an afterthought. An employee who fails to take the test, regardless of position, may face extreme measures, including termination.

“It (security) has got to start with the people, because over 80% of all vulnerabilities usually come through your employees,” Fisher said. 

Doug Fisher at Lenovo Tech World 2025 India Edition. Source: AIM

Personal Approach to AI Security

Fisher moved into his expanded role of leading Lenovo’s AI governance last year, taking on the responsibility of heading Lenovo’s AI policy while working alongside the Responsible AI Committee, which was established in 2022. Notably, Fisher spent over 23 years at Intel in multiple leadership positions. 

“One thing that we did a lot at Intel was constructive confrontation. I had to modify how I approached that a little bit because of the cultural differences. But it doesn’t change the core of what I do as a leader, which is to confront issues, not people: attack the idea, not the person. I think that helped shape who I was,” Fisher added. 

Moreover, Fisher also served in the US Navy. “The discipline and things I learned early on as a 17-year-old going into the military, I think, shaped me from a leadership perspective. Then Intel refined it.”

His approach is grounded in discipline, risk assessment, and proactive threat management. “Only the paranoid survive,” he said, echoing the famous words of former Intel CEO Andrew Grove.

At Lenovo, Fisher has worked to instill a mindset where security is not just a department’s responsibility but a company-wide mission. His experience in both hardware and software security allows him to bridge gaps between AI innovation and cybersecurity governance. 

“The rate at which I get things done is incredible at Lenovo because I used the base training I had and I built it on collaboration,” he concluded. 

]]>
How Longevity India is Riding the AI Health Revolution https://analyticsindiamag.com/ai-features/how-longevity-india-is-riding-the-ai-health-revolution/ Mon, 10 Mar 2025 13:48:24 +0000 https://analyticsindiamag.com/?p=10165766 These models can interpret complex medical data, genomic sequences, and cellular interactions, making AI a powerful tool in predictive medicine.]]>

In the far West, American entrepreneur Bryan Johnson is finding unfathomable ways to prevent himself from dying. He has even built a community called ‘Don’t Die’! While immortality remains out of reach, humans have long pursued ways to extend our lifespan.

In India, we are integrating technology into healthcare to transition from reactive to predictive AI-driven health solutions.

At the RISE – Longevity India Conference 2025, underway at IISc, Bengaluru, Accel partner and Longevity India co-founder Prashanth Prakash outlined how AI and systems biology are transforming diagnostics, creating a healthcare model focused on prevention rather than cure.

AI for Predictive Health 

Prashanth Prakash at RISE for Healthy Aging Conference. Credit: AIM

Prakash, who supports Biopeak, a longevity clinic focusing on precision diagnostics and AI-driven insights for health management, highlighted how AI is bridging the gap between systems biology and clinical applications. 

Unlike traditional medical diagnostics that rely on isolated tests, AI-driven models analyse molecular pathways, genetic markers, and large-scale health data to predict diseases before symptoms appear.

He stressed that India is uniquely positioned to build a next-gen AI-driven health system, bypassing the legacy constraints that have slowed down Western healthcare. 

“We don’t have a lot of healthcare infrastructure, which means we have the luxury to engineer something new without being compromised by all kinds of insurance and other entities,” Prakash said at the conference. 

He also explained how AI is unlocking new possibilities. “The problem with the current system, which of course all of us are very familiar with, is that it’s slightly more partitioned and siloed,” he remarked.  

Prakash noted that the real connection is with systems biology, which has matured over time and is now being brought into the mainstream by AI.

He envisions AI playing a key role in quantitative language models that go beyond text-based data processing. These models can interpret complex medical data, genomic sequences, and cellular interactions, making AI a powerful tool in predictive medicine.

“It’s probably generative AI, but I think it’s a web of complex AI systems. You need classical reasoning AI systems, generative AI, and quantitative language models, not just systems that can deal with English, but those that can deal with more complex medical data.”

The shift towards AI-driven diagnostics is already happening through Biopeak. Instead of relying on conventional blood tests, Biopeak leverages AI to uncover early indicators of chronic diseases that might otherwise go undetected. 

“What Biopeak is doing is again very cutting edge because in conscious medicine, there are things that you can do in the US, and there are things that you can do in Singapore, but I think there is more opportunity here in India to find the standards for conscious medicine,” he said.

Government and Research Support

Institutions such as IISc have taken steps to integrate AI into longevity research. IISc’s ICMR-backed Center for Advanced Research is focusing on computational models for aging and predictive health analytics.

Govindan Rangarajan, director of IISc, reinforced this interdisciplinary approach. “The Center for Advanced Research will focus on healthy aging and also look at all the models for modeling aging and aging axis etc. 

“It involves five departments—besides the biology department, it also includes computer science and materials engineering,” he said at the conference. 

Karnataka’s health minister Dinesh Gundu Rao said, “There is so much happening in the fields of medical science and bioscience on the cellular and molecular levels. It’s not just limited to increasing your life span and reducing or slowing down aging, but also reversing aging.” 

Interestingly, traditional health practices such as Ayurveda are also being studied with respect to biology and genomics to understand healthy aging, as highlighted by Vaidya Rajesh Kotecha, secretary of the Ayush ministry. 

“From the Ayurveda perspective, it is interesting that there is a whole science that talks about the aging population and how to provide healthy aging or quality health for the aging population,” said Kotecha. 

With a number of initiatives driving longevity research, the country is better placed to achieve breakthroughs than the West, where AI adoption in healthcare is hindered by regulatory constraints. Predictive health models, all powered by technology, can then become the norm rather than the exception. 

“Computer science will be the glue that will bind everything together,” Prakash concluded. 

]]>
Manus is a Wrapper of Anthropic’s Claude, and It’s Okay https://analyticsindiamag.com/ai-features/manus-is-a-wrapper-of-anthropics-claude-and-its-okay/ Mon, 10 Mar 2025 09:41:07 +0000 https://analyticsindiamag.com/?p=10165743 “Manus didn’t just slap an API on a model. They built an autonomous system that can execute deep research, deep thinking, and multi-step tasks in a way that no other AI have.”]]>

Over the last few days, the AI ecosystem has been struck by a familiar sense of déjà vu, echoing the DeepSeek phenomenon, as a new Chinese startup enters the fast-evolving territory of AI agents. Manus, based out of Shenzhen in China, has built what it calls a ‘general purpose AI agent’. 

The general agent can plan, execute and deliver complete results autonomously while browsing websites in real time, processing and generating multiple data types. It also uses multiple tools to deliver results. 

Despite Manus being available invite-only, its capabilities blew up quickly. Deedy Das, principal at Menlo Ventures, said, “Manus, the new AI product that everyone’s talking about, is worth the hype. This is the AI agent we were promised.” He highlighted that the agent could complete two weeks’ worth of professional work in around an hour. 

Andrew Wilkinson, co-founder of technology holding company Tiny, said, “I feel like I just time travelled six months into the future.” He even went on to say he got Manus to build a replacement for software his company currently spends $6,000 annually on.

The company also showcased various capabilities such as creating detailed itineraries, in-depth data analysis of stocks and businesses, research reports on various topics, designing games, interactive educational courses, etc. Users are also calling it a combination of deep research tools, autonomous operator and computer use functionality, and a coding agent equipped with memory.

Besides Manus’ agentic “mind-blowing capabilities”, the platform has also garnered praise for its overall user experience (UX). “The UX is what so many others promised, but this time it just works,” Victor Mustar, head of product at Hugging Face, said. That said, Manus still requires human input to grant various permissions and approvals. 

Manus also evaluated the agent on the GAIA benchmark, which tests general AI assistants on solving real-world problems. As per the results, Manus outperformed OpenAI’s Deep Research. 

Source: Manus AI

Manus ‘Deserves Respect’ Even if it is a Wrapper 

However, a few days later, X users discovered that Manus was running on top of Anthropic’s Claude Sonnet model, along with many other tools like Browser Use. Some users were quick to express their disappointment, with some going so far as to say that Manus has no ‘moat’, or market advantage, to begin with. 

To achieve its capabilities, Manus acts as a ‘wrapper’ around the best AI models in the ecosystem, a practice that carries a strangely negative connotation on social media. At the end of the day, Manus has succeeded in building a well-designed interface that leverages the agentic capabilities of a foundational AI model.

Aidan McLaughlin, a professional at OpenAI, mentioned on X that he doesn’t care that it is a wrapper. “If it created value, it deserves my respect. Care about capabilities, not architecture.” 

Besides, preliminary reviews of Manus also underscore the power of today’s AI models: abilities even the labs making them haven’t been able to unleash. 

“Manus didn’t just slap an API on a model. They built an autonomous system that can execute deep research, deep thinking, and multi-step tasks in a way that no other AI have,” said Richardson Dackam, founder at GitGlance.co. 

Moreover, if Manus was built on top of existing models from the United States, why would they not be able to ship these capabilities themselves? “I assume every US lab has these capabilities or better behind the scenes and isn’t shipping them due to risk aversion, some of which comes from regulatory risk,” revealed Dean W Ball, an AI researcher. 

On the brighter side, Manus is built on top of existing LLMs, which indicates that its capabilities can be replicated. This led to a wave of anticipation among users on X, many of whom hoped to see an open-source version. 

It seems these wishes were granted sooner than expected. A few developers have already built an open-source alternative called ‘OpenManus’, which is now available on GitHub. 

However, Manus has received its fair share of criticism as well. Users reported that Manus took excessive time to perform tasks and sometimes failed to finish them altogether. Derya Unutmaz, a biomedical scientist, compared it to OpenAI’s Deep Research and revealed that while Deep Research finished the task in 15 minutes, Manus AI failed after 50 minutes at step 18/20. 

Simon Smith, EVP of generative AI at Klick Health, attributed these issues to the fact that Manus’ underlying model may not be as good as OpenAI’s Deep Research. Further, he added that because Manus is using multiple models underneath, it might take longer than Deep Research to produce a full report. 

Another user pointed out that Manus gets stuck on web searches, “breaks in between” due to context issues on code-based tasks, and was generally slow.

Some users also critiqued Manus’ invite-only approach for gaining access, and how they were handed out to influencers on social media to churn up the hype.

Granted, Manus is still in its early stages and will likely refine its capabilities. However, one critical question remains: how long until OpenAI, Anthropic, or even Google step up and ship a more accessible version of what Manus can do? 

]]>
Google DeepMind’s Thinking Models: What to Expect https://analyticsindiamag.com/ai-features/google-deepminds-thinking-models-what-to-expect/ Mon, 10 Mar 2025 09:30:00 +0000 https://analyticsindiamag.com/?p=10165736 Google DeepMind’s principal research scientist sheds light on the development of thinking models and what he thinks about them.]]>

Google DeepMind has been making significant strides in developing ‘thinking models’—a new class of AI models that can reason, plan, and solve complex problems more effectively than previous models. 

In a podcast on YouTube channel Google for Developers, Jack Rae, a principal research scientist at Google DeepMind, spoke at length about how Google DeepMind’s thinking models are being built, giving us a glimpse of what’s to come. 

“The key intuition of what a reasoning model is about is that it is going to try and compose knowledge to a specific scenario that may be novel or unseen,” he told Logan Kilpatrick, senior product manager at Google DeepMind.

How Is It Going So Far?

Google’s efforts have yielded rapid advancements in the capabilities of their thinking models, with significant improvements in their performance on tasks such as math, coding, and multimodal reasoning.

Google DeepMind recently conducted a study that showed how AI can think deeper using a ‘Mind Evolution’ technique. “What we’re seeing is truly a new paradigm,” said Rae. “We’re finding multiple avenues of being able to spend more compute on inference time, like, during the response.”

The company is already seeing the fruits of its labour. It has released two experimental versions of the Gemini Flash Thinking model, which are available for free on AI Studio. Addressing the product launches, Rae highlighted that the thinking models will see the usage of more tools from within Gemini in the near future.

“The model is going to be using more and more tools during thinking in order to really get to the core essence of the problem that it needs to solve,” Rae predicted, citing examples of integration with Google Search and Maps. He thinks such tool use will make the models more agentic, which will be important even while a model is thinking.

Rae also believes that the industry does not need any research breakthroughs to give a model “infinite context”. He emphasises that the right ingredients are already available; what remains is the engineering to make it a reality.

Feedback in the Loop of its Development

Rae told Kilpatrick that he was excited about the model being launched in an experimental phase as the user feedback would help them learn more about the thinking model’s capabilities.

To give an example of how feedback is shaping development, Rae recalled not realising that capping context support at 32k would constrain users until he reached out to the academics who were using the reasoning model as part of their research.

Similarly, he shared another instance where a user trying to switch from the Gemini Flash to the Flash Thinking model surfaced the need for an internal code change, which he then worked on fixing.

Speaking about the timeline for the development of the thinking models, he said the team started working on them in October 2024 and had a version ready to ship for developer feedback within two months. With the feedback received over the holiday period at the end of December, they released an update to the model in January 2025.

While Rae did not officially mention it, keen observers on the internet have speculated that new Gemini models based on non-experimental thinking models should be released on March 12.

The Future of Google DeepMind’s Thinking Models

“We’re looking forward to a bunch of very exciting future releases,” Rae teased. The company is actively gathering feedback from developers and working towards a general availability (GA) release of the model.

“It’s become clear that people want to build on this model and have it as a stable foundation,” Rae acknowledged. “And GA is just essential for that. So that’s something on the roadmap for sure.”

Google DeepMind plans to continue improving its capabilities, exploring new product experiences, and enabling them to use tools like code execution and search during the thinking process.

Thinking models are also expected to play a crucial role in the development of AI agents, which can interact with the world and perform tasks autonomously. “There are two things that I think are very important for useful agentic capability that reasoning will give. One is reliability… the other is complex capability.”

As Google DeepMind continues to push the boundaries of AI, thinking models are likely to become the cornerstone of future AI systems, enabling them to solve increasingly complex problems and interact with the world in more meaningful ways.

]]>
This Developer Ran the 671 Billion Parameter DeepSeek-R1 Model—Without a GPU https://analyticsindiamag.com/ai-features/this-developer-ran-the-671-billion-parameter-deepseek-r1-model-without-a-gpu/ Mon, 10 Mar 2025 08:39:50 +0000 https://analyticsindiamag.com/?p=10165735 No, it wasn’t a distilled version, but a quantised variant at 2.51 bits per parameter.]]>

While companies like DeepSeek, Alibaba, and Meta host their open-weight models on cloud-based chatbots, the true value lies in the ability to run these models locally. This approach eliminates the reliance on cloud infrastructure.

Running these models not only alleviates privacy concerns and censorship restrictions but also lets developers fine-tune these models and tailor them to specific use cases.

While the best results and outputs would necessitate a large model trained on a large corpus of data, it would also demand high computing power and expensive hardware rigs to deploy locally. 

Notably, John Leimgruber, a software engineer from the United States with two years of engineering experience, managed to bypass the need for expensive GPUs to host the massive 671-billion-parameter DeepSeek-R1 model. He ran a quantised version of the model off a fast NVMe SSD.

In a conversation with AIM, Leimgruber explained what made it possible. 

MLA, MoE, and Native 8-Bit Weights for the Win 

Leimgruber used a quantised, non-distilled version of the model, developed by Unsloth.ai—a 2.51 bits-per-parameter model, which he said retained good quality despite being compressed to just 212 GB. 

Even before quantisation, the model natively uses 8-bit weights, which makes it quite efficient by default. 

“For starters, each of those 671B parameters is just 8 bits for a total of 671 GB file size. Compare that to Llama-3.1-405B which requires 16 bits per parameter for a total of 810 GB file size,” Leimgruber added. 

Leimgruber ran the model after disabling the NVIDIA RTX 3090 Ti GPU on his gaming rig, which has 96 GB of RAM and 24 GB of VRAM. 

He explained that the “secret trick” is to load only the KV cache into RAM, while allowing llama.cpp to handle the model files using its default behaviour—memory-mapping (mmap) them directly from a fast NVMe SSD. “The rest of your system RAM acts as disk cache for the active weights,” he added. 

This means that most of the model runs directly from the NVMe SSD, with the system memory speeding up the access to the model. 

Leimgruber also clarified that this won’t wear down the SSD’s write-cycle lifetime, since the model is accessed via memory mapping, which mostly reads from disk, rather than swap memory, where data is frequently written and erased. 

He could run the model at a little over two tokens per second. To put that in perspective, Microsoft recently revealed that the distilled version of the DeepSeek-R1 14B model gave eight tokens per second. In contrast, AI models deployed on the cloud, like ChatGPT or Claude, can output 50-60 tokens per second. 

However, he also suggested that having a single GPU with 16-24 GB of memory is still better than not having a GPU at all. “This is because the attention layers and kv-cache calculations can be run on the GPU to take advantage of optimisations like CUDA (Compute Unified Device Architecture) graphs while the bulk of the model’s MoE (Mixture of Experts) weights run in system RAM,” he said. 
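For readers who want to experiment with a similar setup, the approach described above maps onto a llama.cpp invocation roughly like the following. This is a hedged sketch, not Leimgruber's exact command: the GGUF filename is illustrative (Unsloth ships the quant as a multi-part file whose naming may differ), and flag names can change between llama.cpp releases.

```shell
# CPU/SSD-only run of the ~212 GB 2.51-bpw quant. llama.cpp memory-maps the
# GGUF from the NVMe drive by default, so spare system RAM acts as a disk
# cache for whichever expert weights are currently active.
# --ctx-size is kept modest so the KV cache fits comfortably in RAM;
# --n-gpu-layers 0 disables GPU offload entirely, as in the experiment above.
./llama-cli \
  -m DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf \
  --ctx-size 4096 \
  --n-gpu-layers 0 \
  -p "Explain mixture-of-experts routing in one paragraph."
```

With a single 16-24 GB GPU available, raising `--n-gpu-layers` to a small non-zero value offloads the attention and KV-cache work to CUDA while the bulk of the MoE weights stay in system RAM, matching the hybrid setup Leimgruber describes.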

Leimgruber provided detailed benchmarks and examples of generations in a GitHub post. 

This is largely possible due to DeepSeek’s architecture, besides its native 8-bit weights. DeepSeek’s MoE architecture means that only 37B parameters are active at a time while generating tokens. 

“This is much more efficient than a traditional dense model like GPT-3 or Llama-3.1-405B because each token requires less computations,” Leimgruber said. 

Moreover, the Multi-Head Latent Attention (MLA) allows for longer context chats, as it performs calculations on a compressed latent space instead of fully uncompressed context like most other LLMs. 

All things considered, the best way for most home users to run a model locally on a desktop without a GPU is to use applications like Ollama with much smaller, distilled versions of DeepSeek-R1. In particular, the distilled variant of Alibaba’s Qwen-2.5-32B model is fine-tuned on DeepSeek-R1 outputs to produce reasoning-like responses. A user recently published a tutorial on GitHub on how to deploy this model locally using Inferless. 

For even smaller versions, LinuxConfig.Org posted a tutorial on deploying the 7B version of DeepSeek-R1 without a GPU. Similarly, DataCamp published a detailed tutorial on deploying the model on Windows and Mac machines using Ollama. 
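As a quick illustration of the Ollama route mentioned above, pulling and chatting with a distilled variant is a one-liner. The model tags below are those Ollama lists for its DeepSeek-R1 distillations at the time of writing and may change:

```shell
# Pull and run the 7B distilled variant interactively;
# Ollama falls back to CPU inference if no supported GPU is found.
ollama run deepseek-r1:7b

# The larger Qwen-2.5-32B distillation discussed above is also available,
# though it needs substantially more RAM/VRAM than the 7B tag.
ollama run deepseek-r1:32b
```

The first invocation downloads the weights on first use; subsequent runs start the chat session directly.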

]]>
IIT Professor Hints at AI Tools Being Allowed in JEE Exams Someday https://analyticsindiamag.com/ai-features/iit-professor-hints-at-ai-tools-being-allowed-in-jee-exams-someday/ Mon, 10 Mar 2025 06:05:43 +0000 https://analyticsindiamag.com/?p=10165719 When asked about the possibility of AI tutors replacing human teachers, he said that teaching is one of the professions least likely to be replaced by AI. ]]>

Contrary to concerns that an overreliance on AI might dull critical thinking, Professor Balaraman Ravindran from IIT Madras believes these tools are transformative and encourages aspiring IITians and students to embrace AI resources such as ChatGPT and Perplexity AI. 

“I do not buy the argument that AI tools make people dumber. They train them to solve problems differently,” Dr. Ravindran said.  

A recent Microsoft study said that while generative AI (GenAI) tools can significantly reduce workload, they also risk diminishing critical thinking skills among knowledge workers. However, Ravindran offered a different perspective, suggesting that using AI like a calculator could help tackle larger challenges more effectively than relying solely on mental calculations.

Dr. Ravindran brings over two decades of teaching experience to his role in the Department of Computer Science and Engineering at the Indian Institute of Technology Madras (IITM), where he has been a faculty member since 2004. He, however, acknowledged that AI has put teachers in a tricky position, requiring them to adapt and change their teaching methods. 

Redefining Teaching for an AI Era

Ravindran acknowledged the growing trend of students using AI tools to complete their assignments. To counter this, one of his colleagues gave students a slightly trickier question, requiring them to submit the prompts they used to generate their answers.

Besides, the department shifted towards in-class assignments during tutorial sessions, with Wi-Fi disabled to prevent reliance on AI tools. These assignments use simpler questions that test basic concepts, allowing students to refer to their notes. This blurs the line between exams and assignments, creating a continuous learning experience. 

Ravindran has taught several NPTEL courses on reinforcement learning and machine learning, which are highly regarded for their comprehensive coverage and clarity. He said his courses are now dubbed into regional languages to connect with students nationwide. This is being done in collaboration with AI4Bharat. 

This is a welcome change, as students in the hinterlands can now access good resources otherwise. He believes that AI democratises access to advanced skills (e.g., high-level math or critical thinking), making them available to a broader population, not just prodigies. 

When asked about the possibility of AI tutors replacing human teachers, he said that teaching is one of the professions least likely to be replaced by AI. “The empathy and psychological support that a teacher provides to students are difficult for AI to replicate.” While this may hold true for younger students, the situation could be different in higher education.

Citing a personal example, Ravindran stated that with over 180 students in his class, it may be challenging to remember each student’s name or provide personalised instruction, which AI could assist with. 

He stressed that teachers must add more value to the class, going beyond the fundamentals, as the concepts will not change overnight. 

Ravindran shares interesting stories and anecdotes while teaching reinforcement learning. Now, he says AI can be trained to deliver personalised teaching on the same topic to each student differently depending on their understanding capacity. 

Will AI Impact Critical Thinking?

Ravindran acknowledged that technology can lead to a decline in certain skills (e.g., mental arithmetic) but argued that this does not equate to reduced intelligence. 

His views resonate with those of Google chief Sundar Pichai, who recently recounted a personal anecdote highlighting his initial discomfort with his children’s use of smartphones to learn math. Pichai remarked, “I grew up doing math using logarithmic tables, and I was uncomfortable watching my kids learn math with smartphones. They’ve turned out just fine.”

When asked whether IIT would adapt its entrance exams to accommodate AI tools, he said that competitive exams like JEE and GRE are unlikely to change in the near future.

“I do not think we will react fast enough to change JEE, GRE, or other competitive exams,” Ravindran said, adding that changes are inevitable and may not come immediately. 

Govt Should Act on Revising Curriculum

Discussing India’s education system, Ravindran believes the current curriculum is not adequately prepared for the next generation of students who will soon enter the workforce. 

“The current curriculum certainly is not ready for AI. I have been saying that regardless of what field of work students are going into, they will be able to do it better with judicious use of AI tools,” he said. 

Students should be trained not just on the fundamentals of the subject but also how they can use AI to improve their efficiency and enhance their capacity, he opined. “Teaching students a single AI module just for formality is not enough,” Ravindran remarked. 

Meanwhile, drawing on his expertise in software engineering, Ravindran argued that education should shift from merely teaching coding to equipping students with skills to design code structures and translate user requirements into functional program modules.

Much like in CAD/CAM (Computer-Aided Design and Manufacturing), Ravindran noted that the emphasis should be on design principles and verification rather than merely operating AI tools.  “I do not think we are at a point where we can completely rely on the outputs of these AI systems,” he said. 

Echoing similar views, BITS Pilani vice chancellor V Ramgopal Rao, in an earlier interaction, told AIM that while AI is being introduced in many universities, it often lacks depth. Topics like deep learning, reinforcement learning, and advanced AI applications are optional or lightly covered.

“There is limited focus on hands-on projects or exposure to real-world problems, which are essential for AI development,” Rao said. “AI is a fast-evolving field, and the lack of emphasis on research and innovation within the curriculum limits students’ ability to contribute to global advancements.”

]]>
Bengaluru Might Become the Biggest Victim of AI https://analyticsindiamag.com/ai-features/bengaluru-might-become-the-biggest-victim-of-ai/ Sun, 09 Mar 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10165463 While developers are feeling the heat right now, AI’s impact won’t be limited to tech jobs. ]]>

For the past few decades, Bengaluru has been the hub of India’s tech industry, drawing people from all corners of the country into the city. The city’s thriving IT, GCC, and startup ecosystem makes it the Silicon Valley of the East. The rise of AI has only accelerated this, as AI engineers build products and startups from the comfort of their apartments or coffee shops. Some even host investor meetups at tea stalls and successfully raise funding.

However, everything comes at a cost.

What was once referred to as the outsourcing hub of the world, with US companies often referring to their job losses as “getting Bangalored”, is now struggling with the rapid rise of AI-driven automation. Low-skill, repetitive IT jobs are being replaced by automation at an alarming rate, prompting even IT giants like HCLTech and TCS to reevaluate their 30-year-old business model to cope with the influence of AI. 

The software development and services industry is undergoing a transformation in which AI can generate, debug, and optimise code with increasing efficiency. This is precisely the kind of work the country’s favourite tech hub has long offered. 

Many Reddit users believe that the biggest victims of AI will be low-salaried IT employees, particularly those in service-based roles such as testing, support, and entry-level development. “The way things are moving, there’s a high chance of mass job losses in Bengaluru in the coming weeks to months,” a user predicted.

The First Signs of AI Disruption Are Already Here

Companies are cutting jobs, automation is replacing entry-level developers, and outsourcing—the backbone of Bengaluru’s IT industry—is under threat.

According to Nasscom, over 1.4 million people were employed in the most vulnerable operations in 2021, with a third of them working in call centres. AI has long been expected to affect call centre and BPO jobs the most, and this shift had already begun by 2022.

“AI and automation…that’s the reality. AI will play a role everywhere…in the areas of medical equipment, smartphones, chipmaking and so on. Robots will replace human beings,” Karnataka industries minister MB Patil told DH, adding that there were even robot masseuses. Patil, however, pointed out that it was “too early” to speculate how AI would impact jobs.

Previously, IT firms charged clients based on workforce size, but many now link fees to outcomes instead, shifting towards a service-as-a-software model. Global demand for human workers is set to decline even further amid predictions of AI agents entering the workforce.

In the year ending March, the industry added just 60,000 jobs—the lowest increase in over a decade. Meanwhile, major IT firms like TCS, Infosys, and Wipro saw their combined workforce shrink by more than 60,000, as the focus remained on AI and automation.

While companies largely attribute the hiring slowdown to post-pandemic overexpansion rather than automation, they expect recruitment to improve this year. Despite concerns, industry leaders remain optimistic that AI will create new business opportunities, even as it disrupts existing operations.

“AI is definitely reshaping Bengaluru’s job market, but I wouldn’t say we’re looking at mass job losses or an economic downturn,” Krishna Vij, VP of TeamLease Digital, told AIM. “Yes, some low-level coding roles might get automated, but at the same time, we’re seeing a surge in demand for AI engineering, machine learning, and cloud expertise.”

Vij added that Bengaluru has always adapted to tech shifts, moving from IT outsourcing to becoming a global innovation hub, and that AI is simply the next phase. “Companies aren’t just cutting jobs; they’re focusing on upskilling their workforce.”

AI tools have significantly reduced the need for junior engineers who once handled mundane coding tasks. Large corporations, once known for hiring tens of thousands of software engineers annually, are now re-evaluating their workforce needs.

But this is just the beginning. While developers are feeling the heat, AI’s impact won’t be limited to tech jobs. Automation is creeping into finance, operations, accounting, and even legal services. The city, which once prided itself on being India’s brain trust, may soon find itself in an identity crisis.

The Migration Boom Might Get Affected

For years, Bengaluru has been a magnet for talent from across India. Though it has been reiterated several times that AI will create new types of jobs, resulting in net-zero job displacement over time, much like computers and the internet did, the impact cannot be completely zero.

Sebastian Thrun, Google X co-founder, articulated this dichotomy at a recent summit, noting that while approximately 60% of current jobs may disappear, far more new jobs will emerge as a result of AI and other technologies. Thrun emphasised that AI’s rise would lead to a shift in job types rather than a permanent reduction in employment.

Young graduates from smaller towns and cities dreamed of moving here, lured by the promise of high-paying tech jobs. This influx of workers fueled an entire economy—rental housing, hostels, PG accommodations, food joints, and local businesses.

But what happens when these jobs dry up?

With AI automating thousands of roles, the need for massive tech hiring is shrinking. The city’s famed “tech migration” could slow to a trickle, leaving landlords, small businesses, and local economies in trouble. PG owners who once charged exorbitant rents for cramped accommodations may soon find their properties sitting empty. Restaurants, tea stalls, and street vendors who catered to Bengaluru’s massive IT workforce may see dwindling footfalls.

“I don’t see a mass exodus happening. Bengaluru still attracts top talent, startups, and global firms. If history has taught us anything, it’s that every tech disruption, whether it was automation or cloud, has led to transformation and not decline,” Vij said. 

Funnily enough, in the short term, at least, we might see Bengaluru’s traffic improving due to fewer people commuting to work.

Bengaluru’s real estate market has long been inflated due to the sheer number of tech workers willing to pay high rents. If layoffs continue and hiring slows down, landlords, especially those around Outer Ring Road (ORR), may struggle to find tenants. The once-booming rental market could see a sharp correction.

From chai stalls near tech parks to high-end bars in Indiranagar, Bengaluru’s economy thrives on disposable income from IT workers. A slowdown in hiring and rising job insecurity could dent consumer spending, hitting local businesses hard.

While Bengaluru’s GCCs have been resilient and continue to grow in number, AI is forcing them to rethink their workforce structures. If AI-powered automation reduces their need for human talent, even these global giants may scale down their Indian operations over time.

The city has survived technological shifts before. When automation replaced data entry jobs, Bengaluru managed to adapt. When cloud computing disrupted IT services, Bengaluru built expertise around it. However, AI is a different beast: its impact is broader, faster, and harder to predict.

Academia has noted that AI is not merely a technological enhancement but “a pivotal factor driving the IT sector’s evolution in Bengaluru”. This perspective suggests that the city could emerge strengthened rather than diminished through strategic adaptation.

To stay relevant, Bengaluru’s workforce needs to evolve. Experts suggest that upskilling in AI, machine learning, and automation-related fields is the only way forward. However, not everyone will be able to make the leap, especially those in roles that are already becoming obsolete.
