AI Trends – Analytics India Magazine

Meet the Top 10 Women Driving Change in GCCs
https://analyticsindiamag.com/ai-trends/meet-the-top-10-women-driving-change-in-gccs/ (published 20 February 2025)
These women spearhead digital transformation, champion diversity, and strengthen global strategies, proving that leadership has no gender.

In the bustling tech hubs and corporate centres of India, a quiet revolution is taking place. A new generation of women leaders is rising – bold, visionary and ready to transform the future of global capability centres (GCCs). These women spearhead digital transformation, champion diversity, and strengthen global strategies, proving that leadership has no gender. 

Meet these exceptional Indian women who are not only leading their organisations but also paving the way for future generations of leaders.



1. Lalitha Indrakanti

CEO of Jaguar Land Rover Technology & Business Services India

Lalitha Indrakanti is a seasoned business leader with nearly three decades of global experience in consulting, advisory, IT services, and digital transformations. At JLR Technology & Business Services India, she guides the organisation’s strategic vision across various enterprise functions, including engineering, IT, and supply chain. 

Indrakanti is known for her resilience, driving innovation and fostering inclusive cultures within her organisation. Outside her day job, she is an active member of the wider business community and industry bodies. A former member of the NASSCOM GCC Council and ex-chairperson of its regional council for Karnataka, Indrakanti has consistently served on corporate boards for over a decade.

2. Sirisha Voruganti

CEO & MD of Lloyds Technology Centre India

Under Sirisha Voruganti’s guidance, the Lloyds Technology Centre in Hyderabad has become a hub of innovation. Her deep expertise in IT architecture, data engineering, and fintech, combined with experience in senior technology roles at global companies, helps her foster an environment where new ideas and solutions flourish, driving growth in the tech world.

Previously, she served as the MD and board member at JCPenney India. Notably, she was also the first woman to hold the position of managing director for JPMorgan Chase in Technology in India. Voruganti’s emphasis on diversity and gender equity has transformed the workplace culture, encouraging a more inclusive environment.

3. Uma Ratnam Krishnan

Managing director & CEO of Optum India

Uma Ratnam Krishnan is a key figure in the healthcare technology sector. She leads Optum India’s digital transformation and innovation and has over thirty years of experience in leadership roles across various industries. 

Krishnan started her career as a diplomat in the Indian Foreign Service, which she credits for teaching her about diversity and adaptability. She later transitioned into corporate roles, primarily in the banking sector, working with institutions like ANZ Grindlays Bank and HDFC Bank.

She also served as the co-CEO of Barclays Global Service Centre India. In her current role, she oversees the company’s operations in India, focusing on delivering transformational solutions to global employers and the government to improve the quality of care and help lower costs.

4. Mamatha Madireddy

Managing director & head of HSBC India Global Service Centres

With over 20 years at HSBC, Mamatha Madireddy has been at the heart of its growth and transformation in India. Now, as the MD and head of HSBC India Global Service Centres, she brings a leadership style that values inclusivity, innovation, and excellence. 

Operating from Hyderabad, she continues to shape the future of banking operations. She is also the chair of the NASSCOM Telangana GCC Council and a member of its National GCC Council.

5. Kalavathi GV

Executive director and head of the global development centre at Siemens Healthineers

Kalavathi GV believes that technology has the power to revolutionise healthcare. As the leader of Siemens Healthineers’ global development centre in India, she focuses on innovation that enhances patient care and redefines industry standards.

Before joining Siemens Healthineers, she spent over 14 years in leadership roles at Philips Healthcare, followed by a decade at GE Healthcare. She also has experience in leading global digital transformation efforts at major multinational companies and shaping the future of healthcare through technology and innovation.

6. Anuprita Bhattacharya

Head of Merck IT Centre and IT country head India

Anuprita Bhattacharya is a seasoned IT leader known for her expertise in digital transformation and operational excellence. She champions diversity and inclusion initiatives within Merck’s GCC in India, fostering a collaborative work environment. 

Before moving into her current role, she held various HR positions at Merck Group from 2018 to 2022, including HR business partner and lead HR business partner. Earlier, she spent nearly a decade at General Motors between 2007 and 2016, starting as an assistant manager in HR and progressing to HR manager and deputy manager roles.

7. Sreema Nallasivam

CEO of Metro Business Solution Centre 

As CEO of Metro Business Solution Centre, Sreema Nallasivam has helped the company grow globally and change the game when it comes to GCCs. She’s a big supporter of women leaders, encouraging empathy, patience, and collaboration in leadership. Nallasivam believes in putting the team first, shifting the focus from individual achievements to working together and driving innovation.

With over 13 years at the company, Nallasivam has played a pivotal role in shaping Metro’s GCC, headquartered in Pune. She is also a board member of Metro Global Solution Center.

8. Dhanya Rajeswaran 

Global vice president and country managing director for India, Fluence

Dhanya Rajeswaran serves as the global vice president for Fluence, a leading global provider of energy storage products, services, and optimisation software for renewables and storage. In this capacity, she oversees Fluence’s global innovation centre in Bengaluru, which has become the company’s largest hub. Fluence was established in India in 2022 under her leadership and has rapidly scaled to 400 employees in just over a year.

9. Daisy Chittilapilly

President of Cisco India and SAARC

Daisy Chittilapilly is the president of Cisco’s India and SAARC regions. She took on this role in August 2021, bringing a wealth of experience from her previous positions at Cisco. 

Throughout her career, Chittilapilly has been recognised for her leadership and ability to drive digital transformation across various industries in India, including agriculture, healthcare, and infrastructure. She is also a strong advocate for gender equality in STEM fields, actively addressing the gap between women’s graduation rates and their participation in the workforce. 

10. Madhurima Khandelwal

Vice President at American Express

Madhurima Khandelwal is a highly accomplished professional in the field of analytics and leadership, currently serving as the managing director for the Credit & Fraud Risk (CFR) India Center of Excellence (CoE) at American Express. 

In this role, she leads a team of over 1,700 colleagues, focusing on developing solutions for American Express’s global business. With a career spanning over 18 years at American Express, Khandelwal has held various strategic roles, including head of AI Labs, where she significantly enhanced the company’s machine learning and artificial intelligence capabilities.

AI Growth in India May Be a ‘Power’ Struggle
https://analyticsindiamag.com/ai-trends/ai-growth-in-india-may-be-a-power-struggle/ (published 14 February 2025)
Even if our president chooses not to, the market will drive clean energy forward: Ann Dunkin of the US department of energy.

The rise of AI is putting immense strain on energy systems worldwide. As India emerges as a key player in AI, it must find smart and sustainable ways to meet this growing demand. 

Ann Dunkin, former chief information officer at the US energy department, spoke with AIM on the sidelines of the Invest Karnataka 2025 summit, sharing interesting insights into how India can balance AI growth with sustainable energy solutions.

A Multi-Faceted Energy Strategy

India’s AI future largely depends on a robust and sustainable energy framework. Dunkin believes that the right approach is to leverage all available renewable resources. “Look at India’s assets and where the country can monetise these to get clean power at the lowest cost.”

She spoke about the “All-of-the-above energy policy” that revolves around wind, geothermal, solar, clean hydrogen, and nuclear power. 

Interestingly, the term gained prominence during the 2012 US presidential election, particularly with President Barack Obama’s administration using it to balance fossil fuel development with investments in renewable energy.

While the political shift in the US has lately impacted climate policies, Dunkin remains optimistic about clean energy’s momentum. “Even if our president chooses not to, the market will drive clean energy forward.”

She added that, with abundant sunlight and significant wind energy potential, India is well-positioned to tap into renewable resources to support the power-hungry AI infrastructure.

According to the Economic Survey 2024-2025, India’s data centre market is expected to grow from $4.5 billion in 2023 to $11.6 billion by 2032 at a compound annual growth rate of 10.98%.
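
For a rough sense of the arithmetic behind that projection, here is a quick sanity check in Python. It is a back-of-the-envelope sketch: treating 2023 to 2032 as nine compounding years is our assumption, and the small gap from the cited $11.6 billion comes down to rounding.

```python
# Quick sanity check of the data centre market projection cited above.
# Assumption: the 2023-2032 window is treated as nine compounding years.

start_value_bn = 4.5      # data centre market size in 2023, USD billions
cagr = 0.1098             # compound annual growth rate of 10.98%
years = 9                 # 2023 -> 2032

projected_bn = start_value_bn * (1 + cagr) ** years
print(f"Projected 2032 market size: ${projected_bn:.1f} billion")  # roughly $11.5 billion
```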

With coal generating nearly 75% of India’s electricity, the data centre industry’s reliance on fossil fuels raises sustainability challenges. However, businesses are increasingly turning to renewable energy sources such as solar and wind power. 

Hiranandani Group’s data centre venture, Yotta, plans to secure more than 80% of its energy from renewable sources within the next three to five years. Likewise, Hyderabad-based CtrlS is targeting full reliance on renewable power by 2030.

Meanwhile, Reliance Group, led by billionaire Mukesh Ambani, is purchasing AI GPUs from NVIDIA to build a new data centre in Jamnagar, Gujarat, touted to become the world’s largest. With a projected capacity of three gigawatts, the facility will dwarf existing data centres, which currently operate below one gigawatt.

Jamnagar, home to Reliance’s oil refining and petrochemical operations, is key to the company’s renewable energy ambitions. Nearby, a 5,000-acre green energy complex, including solar, wind, and hydrogen energy projects, is under development. 

The Data Centre Dilemma

Major tech companies like AWS and Microsoft are investing heavily in Indian data centres, raising questions about their long-term sustainability. 

Dunkin pointed out that the focus should not just be on expansion but also on optimising energy and water usage. “Investing in data centres is good, but looking at how we’re going to reduce the cost, energy consumption, and water consumption is equally important.”

She added that AI’s energy footprint can be reduced through innovations in hardware and software, citing DeepSeek and other models as examples of how customers will need less compute to run them. She also noted that smaller language models, which can operate directly on devices, are set to become more common.

Safety First

Beyond energy, Dunkin said that there is a need for AI governance in India. She stressed the importance of data security, regulatory safeguards, and ensuring AI models are built on diverse and unbiased datasets.

“If your model is trained by a bunch of American white men, you’re going to get a model that’s biased towards American white men. We need training data from all backgrounds.”

At the Paris AI Action Summit, Prime Minister Narendra Modi voiced similar concerns about biases in LLMs.  

“We must build quality datasets free from biases, democratise technology and create people-centric applications. We need to address concerns related to cybersecurity, disinformation, and other threats, and must ensure that technology is rooted in many ecosystems for it to be effective and useful,” said Modi. 

It becomes crucial for India to build AI frameworks that reflect its diverse linguistic and cultural landscape.

Agentic Systems 

Looking ahead, Dunkin sees agentic AI as a key productivity driver. However, she also cautioned against data privacy risks, advocating for personal AI models that do not share user data externally. 

Surprisingly, Dunkin doesn’t use AI apps like ChatGPT much, owing to privacy and security concerns. “I don’t use it a lot, in part because I don’t have a private model that I have access to, and so I don’t want my data out there,” she said.

“Your personal model should live on your device or in your personal cloud. It should fetch external data without sending your private data out.”

This thought aligns with India’s push for digital sovereignty and greater control over data privacy in AI applications.

AI and India’s Developer Surge Will Spark Innovation in 2025
https://analyticsindiamag.com/ai-trends/ai-and-indias-developer-surge-will-spark-innovation-in-2025/ (published 22 January 2025)
The sheer size of India’s vibrant developer community signals incredible prosperity for 2025—after all, where developers thrive, economic growth follows.

India’s developer community reached new heights in 2024, both in terms of growth and innovation. The number of developers on GitHub in India surpassed 17 million, which is more than the entire population of Bengaluru city, making it one of the fastest-growing communities on the web. Not just that, the contributions to public generative AI projects increased by 95%, while India’s developer community continued to embrace AI-developer tools, setting the stage for even greater breakthroughs in 2025.

With a surging developer community and the large-scale adoption of AI tools, here’s how India’s developer community will make its mark on the world in 2025.

Indian Devs Hold the Power for Large-Scale Transformation

The sheer size of India’s vibrant developer community signals incredible prosperity for 2025—after all, where developers thrive, economic growth follows.

Combined with the power of AI tools like GitHub Copilot, India’s developer community is poised to fuel a fresh wave of homegrown tech multinationals, a new generation of disruptive startups, and an empowered open-source community like never before. These communities will leverage AI developer tools to shape global innovation and create digital solutions that benefit society—much like the impactful work being done by Open Healthcare Network in India.

To turn this vision into reality, GitHub is empowering every Indian developer to unlock the potential of AI-powered software by offering GitHub Copilot Free in VS Code. With the community shipping software up to 55% faster, immense digital and economic progress lies ahead for India.

Imagine a country where each developer, equipped with AI tools like GitHub Copilot, contributes to the nation’s growth by solving problems for their society. They educate people about technology, contribute to open-source digital public goods, and create newer ways to drive large-scale transformation.

It’s a New Era of ABCD: AnyBody Can Develop!

AI has successfully created a bridge between humans and machine languages. This will empower children in India to start programming in their native languages even before learning English in schools.
Be it Hindi, Kannada or Marathi, aspiring developers can now write and understand code—traditionally a complex abstraction layer—in natural language.

With AI-powered tools such as GitHub Copilot, students can learn to code in their own language, using AI as a personal programming assistant, much like a calculator for coding. This will add millions more to India’s rising developer community, cementing its place as the largest developer hub in the world. More importantly, it will ensure that every Indian student can explore STEM careers and express their creativity through code without needing to learn English first.

Digital Diwali: Where Ideas Ignite with the Power of AI

AI is ready to empower anyone to turn their ideas into reality, all in natural language, leading to an explosion of creativity in India and across the globe. This has never been more possible than with GitHub Spark, an AI-powered tool for creating and sharing micro apps (“sparks”) tailored to individual needs and preferences. While still in technical preview, it points to a world where anyone and everyone is empowered to create or adapt software for themselves.

If India continues to nurture its developer community and grow it at scale while simultaneously embracing the transformative power of AI, the nation will not only cement its place as a global AI leader but also extend the economic opportunity of building software to all its people. The promise of this new future is entirely possible—and 2025 will be a pivotal year in paving the way.

Top 10 Talks from AIM Conferences in 2024
https://analyticsindiamag.com/ai-trends/top-10-talks-from-aim-conferences-in-2024/ (published 8 January 2025)
Former CEO CP Gurnani revealed that Tech Mahindra developed an Indian LLM for local languages and over 37 dialects in just five months with a budget of under $5 million.

AIM organises some of the tech industry’s most impactful conferences to bring together experts and innovators from various fields. These events cover a broad spectrum of topics, from promoting diversity and inclusion in technology to delving into the latest breakthroughs in generative AI. 

Whether you’re a data engineer, AI startup founder, developer, or corporate leader, AIM’s conferences provide important opportunities to learn, connect, and stay at the forefront of the advancing tech landscape.

We have cherry-picked the top 10 talks from the 2024 AIM conferences, which offer exclusive insights into the future of AI, the challenges it presents, and how India is positioning itself as a global leader in the field.

1. ‘Is GenAI for Real’ by Zerodha CTO Kailash Nadh

At Cypher 2024, Zerodha CTO Kailash Nadh expressed his scepticism about the timeline for achieving Artificial General Intelligence (AGI), calling claims of its arrival in two to five years “unrealistic”. He attributed the definition of AGI to Western AI companies and pointed out that it has always been “five years away”. He dismissed such predictions as likely motivated by business or valuation reasons.

Nadh also highlighted the vital role of open-source technology in the generative AI boom. He noted that open-source tools and models often outperform proprietary ones and also mentioned that Zerodha is using large language models (LLMs) to automate certain tasks. On the topic of AI and jobs, Nadh reassured that Zerodha would not allow job losses due to AI. 

2. ‘Lessons in Bravery, Integrity & Leadership for Tech Professionals’ by Kiran Bedi

At Rising 2024, Kiran Bedi, who became India’s first woman Indian Police Service (IPS) officer, shared her journey and insights on women’s success in male-dominated fields. Bedi explained that her first priority was always her career and making herself self-reliant and self-sufficient. Despite the challenges of working in the IPS, where women were underrepresented and faced discrimination, Bedi said she never questioned her abilities and confidently pursued her goals.

Discussing the low representation of women in leadership roles, Bedi pointed out that lack of career prioritisation and family support were key barriers. She recalled how her parents supported her through difficult times.

Bedi highlighted that women need to plan and manage their personal lives effectively and advised that motherhood and family life should be carefully managed to avoid conflicts with career goals.

3. ‘India Proves Sam Altman Wrong!’ by Former Tech Mahindra CEO CP Gurnani

At the MachineCon GCC Summit 2024, CP Gurnani, co-founder of AIonOS and former CEO of Tech Mahindra, challenged OpenAI CEO Sam Altman’s claim that India couldn’t develop its own LLMs.

Gurnani revealed that Tech Mahindra developed an Indian LLM for local languages and over 37 dialects in just five months with a budget of under $5 million.

Gurnani pointed to companies like IndiGo and Airtel, which have taken on giants like Jio, to argue that India’s success hinges on “frugal innovation”. He also introduced his new venture, AIonOS, which aims to disrupt industries like travel and logistics through AI.

Tech Mahindra also launched Project Indus, an indigenous LLM focused on Indic languages and dialects to improve linguistic inclusivity in AI.

4. ‘Scaling AI for Billions: The Indian Perspective’ by Wadhwani AI CEO Shekar Sivasubramanian

At Cypher 2024, Wadhwani AI CEO Shekar Sivasubramanian discussed the concept of applied AI and the challenges of working in India’s vast and diverse ecosystem.

He pointed out that the essence of applied AI lies in bridging the gap between the chaotic, unstructured AI ecosystem and the rigorous, systematic research world. In India, deployment must come before AI development, with a focus on solving real-world problems rather than abstract ones.

Sivasubramanian also highlighted the importance of a “market-of-one” approach, where AI solutions are tailored to specific needs, particularly in government and rural settings.

He outlined the differences between designing AI for ‘India’ – the urban, well-connected population – and ‘Bharat’ – where users often have limited experience with technology. He stressed that AI must be simple, intuitive, and practical for everyday users.

5. ‘Navigating Data Chaos: Using Gen AI to Extract Structured Insights from Unstructured Customer Data’ by NoBroker Data Sciences and Engineering Director Zaher Abdul Azeez

At AIM’s Data Engineering Summit 2024, Zaher Abdul Azeez, director of data sciences and engineering at NoBroker, discussed the potential of GenAI in transforming customer-facing services by extracting valuable insights from unstructured customer data, such as conversations.

Azeez pointed out that customer conversations, which tend to be subjective and informal, are extremely valuable for businesses, particularly those centred around customer experience. Traditional methods of analysing these interactions are manual and labour-intensive. However, GenAI, especially LLMs, offers a more efficient solution by understanding and processing unstructured data from these conversations.

6. ‘Generative AI and the Road to Singularity’ by Tech Whisperer Founder Jaspreet Bindra

At MLDS 2024, Jaspreet Bindra, founder of Tech Whisperer and CEO of Ai&Beyond, opened his keynote by exploring the concept of singularity and its implications for AI. He raised the question of whether AI could surpass human intelligence to become self-sufficient and leave humans obsolete.

Bindra delved into the varying predictions of AGI timelines. He cited experts like Sam Altman and Ray Kurzweil, who have different definitions and timelines for AGI’s arrival, ranging from 2026 to 2030.

7. ‘The Next Tech Superpower: How India Can Lead the World in AI Innovation’ by Former Infosys CFO Mohandas Pai

At Cypher 2024, former Infosys CFO and board member Mohandas Pai spoke about the growing technological partnership between India and the United States, positioning them as leading global digital powers. He contrasted this collaboration with China’s isolation due to its restrictive digital firewall, which limits outside influence and internal connectivity with the global tech ecosystem.

Pai drew attention to the close ties between Bengaluru and Silicon Valley, as well as their shared innovation culture and extensive research collaborations. Although Bengaluru boasts the world’s largest talent pool of chip designers, testers, and embedded software professionals, with over 3.5 lakh experts, Pai noted that the city requires more capital and competitive investment to maximise its potential.

Despite political differences, Pai described the US and India as “connected at the hip” in technology, serving as a “force multiplier” for mutual growth. In contrast, he criticised Delhi for its lack of progress on domestic issues like pollution.

8. ‘Powering India’s AI-First Ambitions With Shakti Cloud’ by Yotta CEO Sunil Gupta

Sunil Gupta, co-founder, managing director and CEO of Yotta, spoke at AIM Cypher 2024 and shared key developments regarding the company’s advancements in AI infrastructure.

Yotta, backed by the Hiranandani Group, had made significant progress in acquiring GPUs to support the AI boom in India. Last year, the company announced plans to acquire 32,000 NVIDIA GPUs over the next two years and had already secured 16,000 NVIDIA H100 GPUs.

Gupta shared that Yotta’s status as an elite NVIDIA partner had ensured access to the latest GPUs, enabling the company to meet a wide range of AI use cases in India, from developing large-scale models to smaller ones.

9. ‘Voice Based AI Agents’ by Sarvam AI Co-Founder Vivek Raghavan

At Cypher 2024, Sarvam AI co-founder Vivek Raghavan discussed the company’s mission to develop voice-based AI solutions tailored to Indian languages and dialects.

He highlighted that India’s culture of conversation drives its focus on voice-led models in local languages. Raghavan demonstrated Sarvam’s voice agents, which operate via telephone and WhatsApp, allowing users to interact in languages like Kannada and Hindi for tasks such as booking appointments and customer support.

10. ‘Impact Investing in AI Merging Profit with Purpose’ by Ronnie Screwvala

At Cypher 2024, upGrad co-founder Ronnie Screwvala talked about how AI can help India achieve its Viksit Bharat vision by 2047, aiming for GDP growth from $3.4 trillion to $30 trillion.

He said that AI should be seen as a tool for enhancing capabilities rather than a threat. Screwvala stressed the importance of AI in maximising intellectual property creation and fostering innovation.

Read: 6 Must-Attend Conferences for Developers by AIM in 2025

LLMs that Failed Miserably in 2024
https://analyticsindiamag.com/ai-trends/llms-that-failed-miserably-in-2024/ (published 3 January 2025)
Databricks spent $10 million developing DBRX, yet only recorded 23 downloads on Hugging Face last month.

                    Looks like the race to build large language models is winding down, with only a few clear winners. Among them, DeepSeek V3 has claimed the spotlight in 2024, leading the charge for Chinese open-source models. Competing head-to-head with closed-source giants like GPT-4 and Claude 3.5, DeepSeek V3 notched 45,499 downloads last month, standing tall alongside Meta’s Llama 3.1 (491,629 downloads) and Google’s Gemma 2 (377,651 downloads), according to Hugging Face.

                    But not all LLMs launched this year could ride the wave of success—some fell flat, failing to capture interest despite grand promises. Here’s a look at the models that couldn’t make their mark in 2024.

                    1. Databricks DBRX

                    Databricks launched DBRX, an open-source LLM with 132 billion parameters, in March 2024. It uses a fine-grained MoE architecture that activates four of 16 experts per input, with 36 billion active parameters. The company claimed that the model outperformed closed-source counterparts like GPT-3.5 and Gemini 1.5 Pro. 

                    However, since its launch, there has been little discussion about its adoption or whether enterprises find it suitable for building applications. The Mosaic team, acquired by Databricks in 2023 for $1.3 billion, led its development, and the company spent $10 million to build DBRX. But sadly, the model saw an abysmal 23 downloads on Hugging Face last month.
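
For readers unfamiliar with the fine-grained mixture-of-experts (MoE) design mentioned above, the snippet below is a minimal, illustrative sketch of top-k expert routing written in NumPy. It is not DBRX’s actual implementation; only the expert counts (4 of 16 active) echo the article, while the tiny dimensions and random weights are purely for illustration.

```python
import numpy as np

# Minimal sketch of fine-grained top-k MoE routing (illustrative, not DBRX's code).
# 16 experts per layer, 4 activated per token, so only a fraction of the
# parameters are "active" for any given input.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 4

experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts."""
    logits = x @ router_w                      # routing score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (64,)
```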

                    2. Falcon 2 

                    In May, the Technology Innovation Institute (TII), Abu Dhabi, released its next series of Falcon language models in two variants: Falcon-2-11B and Falcon-2-11B-VLM. The Falcon 2 models showed impressive benchmark performance, with Falcon-2-11B outperforming Meta’s Llama 3 8B and matching Google’s Gemma 7B, as independently verified by the Hugging Face leaderboard. 

                    However, later in the year, Meta released Llama 3.2 and Llama 3.3, leaving Falcon 2 behind. According to Hugging Face, Falcon-2-11B-VLM recorded just around 1,000 downloads last month.

                    3. Snowflake Arctic 

                    In April, Snowflake launched Arctic LLM, a model with 480B parameters and a dense MoE hybrid Transformer architecture using 128 experts. The company proudly stated that it spent just $2 million to train the model, outperforming DBRX in tasks like SQL generation. 

The company’s pointed comparisons with DBRX suggested an effort to challenge Databricks. Meanwhile, Snowflake acknowledged that models like Llama 3 outperformed it on some benchmarks.

                    4. Stable LM 2 

                    Stability AI launched the Stable LM 2 series in January last year, featuring two variants: Stable LM 2 1.6B and Stable LM 2 12B. The 1.6B model, trained on 2 trillion tokens, supports seven languages, including Spanish, German, Italian, French, and Portuguese, and outperforms models like Microsoft’s Phi-1.5 and TinyLlama 1.1B in most tasks.

                    Stable LM 2 12B, launched in May, offers 12 billion parameters and is trained on 2 trillion tokens in seven languages. The company claimed that the model competes with larger ones like Mixtral, Llama 2, and Qwen 1.5, excelling in tool usage for RAG systems. However, the latest user statistics tell a different story, with just 444 downloads last month.

                    5. Nemotron-4 340B 

Nemotron-4-340B-Instruct is an LLM developed by NVIDIA for synthetic data generation and chat applications. Released in June 2024, it is part of the Nemotron-4 340B series, which also includes the Base and Reward variants. Despite its features, the model has seen minimal uptake, recording just 101 downloads on Hugging Face in December 2024.

                    6. Jamba 

                    AI21 Labs introduced Jamba in March 2024, an LLM that combines Mamba-based structured state space models (SSM) with traditional Transformer layers. The Jamba family includes multiple versions, such as Jamba-v0.1, Jamba 1.5 Mini, and Jamba 1.5 Large.

                    With its 256K token context window, Jamba can process much larger chunks of text than many competing models, sparking initial excitement. However, the model failed to capture much attention, garnering only around 7K downloads on Hugging Face last month.

                    7. AMD OLMo 

                    AMD entered the open-source AI arena in late 2024 with its OLMo series of Transformer-based, decoder-only language models. The OLMo series includes the base OLMo 1B, OLMo 1B SFT (Supervised Fine-Tuned), and OLMo 1B SFT DPO (aligned with human preferences via Direct Preference Optimisation). 

                    Trained on 16 AMD Instinct MI250 GPU-powered nodes, the models achieved a throughput of 12,200 tokens/sec/gpu. 

                    The flagship OLMo 1B model features 1.2 billion parameters, 16 layers, 16 heads, a hidden size of 2048, a context length of 2048 tokens, and a vocabulary size of 50,280, targeting developers, data scientists, and businesses. Despite this, the model failed to gain any traction in the community.

Top AI Courses by NVIDIA for Free in 2025
https://analyticsindiamag.com/ai-trends/free-ai-courses-by-nvidia/ (published 2 January 2025)
All the courses can be completed in less than eight hours.

                    NVIDIA is one of the most influential hardware giants in the world. Apart from its much sought-after GPUs, the company also provides free courses to help you understand more about generative AI, GPU, robotics, chips, and more. 

                    Most importantly, all of these are available free of cost and can be completed in less than a day. Let’s take a look at them.

                    1. Building RAG Agents for LLMs

The Building RAG Agents for LLMs course is available for free for a limited time. It explores the revolutionary impact of large language models (LLMs), particularly retrieval-based systems, which are transforming productivity by enabling informed conversations through interaction with various tools and documents. Designed for individuals keen on harnessing these systems’ potential, the course emphasises practical deployment and efficient implementation to meet the demands of users and deep learning models. Participants will delve into advanced orchestration techniques, including internal reasoning, dialogue management, and effective tooling strategies.

                    In this workshop you will learn to develop an LLM system that interacts predictably with users by utilising internal and external reasoning components.

                    Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-FX-15+V1

                    2. Accelerating Data Science Workflows with Zero Code Changes

                    Efficient data management and analysis are crucial for companies in software, finance, and retail. Traditional CPU-driven workflows are often cumbersome, but GPUs enable faster insights, driving better business decisions. 

                    In this workshop, one will learn to build and execute end-to-end GPU-accelerated data science workflows for rapid data exploration and production deployment. Using RAPIDS™-accelerated libraries, one can apply GPU-accelerated machine learning algorithms, including XGBoost, cuGraph’s single-source shortest path, and cuML’s KNN, DBSCAN, and logistic regression. 

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+T-DS-03+V1

                    3. Generative AI Explained

                    This self-paced, free online course introduces generative AI fundamentals, which involve creating new content based on different inputs. Through this course, participants will grasp the concepts, applications, challenges, and prospects of generative AI. 

                    Learning objectives include defining generative AI and its functioning, outlining diverse applications, and discussing the associated challenges and opportunities. All you need to participate is a basic understanding of machine learning and deep learning principles.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-NP-01+V1

                    4. Digital Fingerprinting with Morpheus

                    This one-hour course introduces participants to developing and deploying the NVIDIA digital fingerprinting AI workflow, providing complete data visibility and significantly reducing threat detection time. 

                    Participants will gain hands-on experience with the NVIDIA Morpheus AI Framework, designed to accelerate GPU-based AI applications for filtering, processing, and classifying large volumes of streaming cybersecurity data. 

                    Additionally, they will learn about the NVIDIA Triton Inference Server, an open-source tool that facilitates standardised deployment and execution of AI models across various workloads. No prerequisites are needed for this tutorial, although familiarity with defensive cybersecurity concepts and the Linux command line is beneficial.

Course link: https://courses.nvidia.com/courses/course-v1:DLI+T-DS-02+V2/

                    5. Building A Brain in 10 Minutes

                    This course delves into neural networks’ foundations, drawing from biological and psychological insights. Its objectives are to elucidate how neural networks employ data for learning and to grasp the mathematical principles underlying a neuron’s functioning. 

                    While anyone can execute the code provided to observe its operations, a solid grasp of fundamental Python 3 programming concepts—including functions, loops, dictionaries, and arrays—is advised. Additionally, familiarity with computing regression lines is also recommended.

Course link: https://courses.nvidia.com/courses/course-v1:DLI+T-FX-01+V1/
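
As a flavour of the material this course covers, here is a minimal, self-contained sketch of a single artificial neuron trained with gradient descent. It is our illustration, not content taken from the NVIDIA notebooks.

```python
import numpy as np

# A single logistic neuron learning the OR function by gradient descent.
# Illustrative sketch only; not taken from the course notebooks.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

w, b, lr = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)            # forward pass
    grad = pred - y                      # gradient of cross-entropy loss w.r.t. pre-activation
    w -= lr * (X.T @ grad) / len(y)      # weight update
    b -= lr * grad.mean()                # bias update

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 1, 1, 1]
```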

6. An Introduction to CUDA

                    This course delves into the fundamentals of writing highly parallel CUDA kernels designed to execute on NVIDIA GPUs. 

                    One can gain proficiency in several key areas: launching massively parallel CUDA kernels on NVIDIA GPUs, orchestrating parallel thread execution for large dataset processing, effectively managing memory transfers between the CPU and GPU, and utilising profiling techniques to analyse and optimise the performance of CUDA code. 

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+T-AC-01+V1

                    7. Augment your LLM Using RAG

Retrieval Augmented Generation (RAG), devised by Facebook AI Research in 2020, offers a method to enhance an LLM’s output by incorporating real-time, domain-specific data, eliminating the need for model retraining. RAG integrates an information retrieval module with a response generator, forming an end-to-end architecture.

                    Drawing from NVIDIA’s internal practices, this introduction aims to provide a foundational understanding of RAG, including its retrieval mechanism and the essential components within NVIDIA’s AI Foundations framework. By grasping these fundamentals, you can initiate your exploration into LLM and RAG applications.

Course link: https://courses.nvidia.com/courses/course-v1:NVIDIA+S-FX-16+v1/
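
To make the retrieve-then-generate idea concrete, below is a minimal, framework-free sketch of the RAG pattern. Everything in it is an illustrative assumption rather than NVIDIA’s implementation: the toy documents, the bag-of-words retriever, and the stubbed-out generator, which a real system would replace with dense embeddings and an actual LLM.

```python
from collections import Counter
import math

# Minimal RAG sketch: retrieve the most relevant document, then pass it to a
# generator as context. The retriever is a toy bag-of-words model and the
# "LLM" is a stub; both are placeholders for real components.

documents = [
    "GPUs accelerate deep learning training and inference workloads.",
    "RAG augments a language model with retrieved, domain-specific context.",
    "CUDA lets developers write massively parallel kernels for NVIDIA GPUs.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    return "[an LLM would answer here, grounded in the prompt]\n" + prompt  # stub

question = "How does RAG improve a language model?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```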

                    8. Getting Started with AI on Jetson Nano

                    The NVIDIA Jetson Nano Developer Kit empowers makers, self-taught developers, and embedded technology enthusiasts worldwide with the capabilities of AI. 

                    This user-friendly, yet powerful computer facilitates the execution of multiple neural networks simultaneously, enabling various applications such as image classification, object detection, segmentation, and speech processing. 

Throughout the course, participants will utilise Jupyter iPython notebooks on Jetson Nano to construct a deep learning classification project employing computer vision models.

                    By the end of the course, individuals will possess the skills to develop their own deep learning classification and regression models leveraging the capabilities of the Jetson Nano.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-RX-02+V2

                    9. Building Video AI Applications at the Edge on Jetson Nano

                    This self-paced online course aims to equip learners with skills in AI-based video understanding using the NVIDIA Jetson Nano Developer Kit. Through practical exercises and Python application samples in JupyterLab notebooks, participants will explore intelligent video analytics (IVA) applications leveraging the NVIDIA DeepStream SDK. 

                    The course covers setting up the Jetson Nano, constructing end-to-end DeepStream pipelines for video analysis, integrating various input and output sources, configuring multiple video streams, and employing alternate inference engines like YOLO. 

Prerequisites include basic Linux command line familiarity and an understanding of Python 3 programming concepts. The course leverages tools like DeepStream and TensorRT, and requires specific hardware components like the Jetson Nano Developer Kit. Assessment is conducted through multiple-choice questions, and a certificate is provided upon completion.

For this course, you will require hardware including the NVIDIA Jetson Nano Developer Kit or the 2GB version, along with a compatible power supply, microSD card, USB data cable, and a USB webcam.

Course link: https://courses.nvidia.com/courses/course-v1:DLI+S-IV-02+V2/

                    10. Build Custom 3D Scene Manipulator Tools on NVIDIA Omniverse

                    This course offers practical guidance on extending and enhancing 3D tools using the adaptable Omniverse platform. Taught by the Omniverse developer ecosystem team, participants will gain skills to develop advanced tools for creating physically accurate virtual worlds. 

                    Through self-paced exercises, learners will delve into Python coding to craft custom scene manipulator tools within Omniverse. Key learning objectives include launching Omniverse Code, installing/enabling extensions, navigating the USD stage hierarchy, and creating widget manipulators for scale control. 

                    The course also covers fixing broken manipulators and building specialised scale manipulators. Required tools include Omniverse Code, Visual Studio Code, and the Python Extension. Minimum hardware requirements comprise a desktop or laptop computer equipped with an Intel i7 Gen 5 or AMD Ryzen processor, along with an NVIDIA RTX Enabled GPU with 16GB of memory. 

Course link: https://courses.nvidia.com/courses/course-v1:DLI+S-OV-06+V1/

                    11. Getting Started with USD for Collaborative 3D Workflows

                    In this self-paced course, participants will delve into the creation of scenes using human-readable Universal Scene Description ASCII (USDA) files. 

                    The programme is divided into two sections: USD Fundamentals, introducing OpenUSD without programming, and Advanced USD, using Python to generate USD files. 

                    Participants will learn OpenUSD scene structures and gain hands-on experience with OpenUSD Composition Arcs, including overriding asset properties with Sublayers, combining assets with References, and creating diverse asset states using Variants.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-FX-02+V1

                    12. Assemble a Simple Robot in Isaac Sim

                    This course offers a practical tutorial on assembling a basic two-wheel mobile robot using the ‘Assemble a Simple Robot’ guide within the Isaac Sim GPU platform. The tutorial spans around 30 minutes and covers key steps such as connecting a local streaming client to an Omniverse Isaac Sim server, loading a USD mock robot into the simulation environment, and configuring joint drives and properties for the robot’s movement. 

                    Additionally, participants will learn to add articulations to the robot. By the end of the course, attendees will gain familiarity with the Isaac Sim interface and documentation necessary to initiate their own robot simulation projects. 

                    The prerequisites for this course include a Windows or Linux computer capable of installing Omniverse Launcher and applications, along with adequate internet bandwidth for client/server streaming. The course is free of charge, with a duration of 30 minutes, focusing on Omniverse technology. 

Course link: https://courses.nvidia.com/courses/course-v1:DLI+T-OV-01+V1/

                    13. How to Build Open USD Applications for industrial twins

                    This course introduces the basics of the Omniverse development platform. One will learn how to get started building 3D applications and tools that deliver the functionality needed to support industrial use cases and workflows for aggregating and reviewing large facilities such as factories, warehouses, and more. 

                    The learning objectives include building an application from a kit template, customising the application via settings, creating and modifying extensions, and expanding extension functionality with new features. 

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-13+V1

                    14. Disaster Risk Monitoring Using Satellite Imagery

                    Created in collaboration with the United Nations Satellite Centre, the course focuses on disaster risk monitoring using satellite imagery, teaching participants to create and implement deep learning models for automated flood detection. The skills gained aim to reduce costs, enhance efficiency, and improve the effectiveness of disaster management efforts. 

                    Participants will learn to execute a machine learning workflow, process large satellite imagery data using hardware-accelerated tools, and apply transfer-learning for building cost-effective deep learning models. 

                    The course also covers deploying models for near real-time analysis and utilising deep learning-based inference for flood event detection and response. Prerequisites include proficiency in Python 3, a basic understanding of machine learning and deep learning concepts, and an interest in satellite imagery manipulation. 

Course link: https://courses.nvidia.com/courses/course-v1:DLI+S-ES-01+V1/

                    15. Introduction to AI in the Data Center

                    In this course, you will learn about AI use cases, machine learning, and deep learning workflows, as well as the architecture and history of GPUs.  With a beginner-friendly approach, the course also covers deployment considerations for AI workloads in data centres, including infrastructure planning and multi-system clusters. 

                    The course is tailored for IT professionals, system and network administrators, DevOps, and data centre professionals. 

Course link: https://www.coursera.org/learn/introduction-ai-data-center

                    16. Fundamentals of Working with Open USD

                    In this course, participants will explore the foundational concepts of Universal Scene Description (OpenUSD), an open framework for detailed 3D environment creation and collaboration. 

                    Participants will learn to use USD for non-destructive processes, efficient scene assembly with layers, and data separation for optimised 3D workflows across various industries. 

                    Also, the session will cover Layering and Composition essentials, model hierarchy principles for efficient scene structuring, and Scene Graph Instancing for improved scene performance and organisation.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-15+V1

                    17. Introduction to Physics-informed Machine Learning with Modulus 

                    High-fidelity simulations in science and engineering are hindered by computational expense and time constraints, limiting their iterative use in design and optimisation. 

                    NVIDIA Modulus, a physics machine learning platform, tackles these challenges by creating deep learning models that outperform traditional methods by up to 100,000 times, providing fast and accurate simulation results.

                    One will learn how Modulus integrates with the Omniverse Platform and how to use its API for data-driven and physics-driven problems, addressing challenges from deep learning to multi-physics simulations.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-04+V1

                    18. Introduction to DOCA for DPUs

                    The DOCA Software Framework, in partnership with BlueField DPUs, enables rapid application development, transforming networking, security, and storage performance. 

                    This self-paced course covers DOCA fundamentals for accelerated data centre computing on DPUs, including visualising the framework paradigm, studying BlueField DPU specs, exploring sample applications, and identifying opportunities for DPU-accelerated computation. 

                    One gains introductory knowledge to kickstart application development for enhanced data centre services.

Course link: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-NP-01+V1

The story was updated on 2nd January 2025 to reflect the latest courses and correct their URLs.

Must-Have Skills for Indian Graduates to Land a Developer’s Job in 2025
https://analyticsindiamag.com/ai-trends/top-skills-for-landing-a-developer-job-as-a-2025-graduate-in-india-2/ (published 29 December 2024)
Startups and tech giants across India are actively seeking Python and Java-proficient developers to drive their AI initiatives.

                    As India experiences a surge in AI job opportunities, graduates entering the job market in 2025 will need to master a strong set of skills to stay ahead of the competition. While speculations and discussions among developers on Reddit suggest a 5-6 month grind for building skills, the right direction to follow remains unclear.

                    The demand for skilled AI and software engineers is set to soar, considering that India’s tech industry anticipates a 9% growth in 2025, driven by sectors like IT, retail, telecom, and BFSI.

                    Based on current trends, here are the top skills for landing a job in India as a 2025 graduate starting from scratch:

                    Core Programming Skills

                    • Python: The demand for Python remains high due to its versatility and extensive use in web development, data science, automation, and AI.

Python, which became the most used language in 2024, is the top choice for job seekers pursuing a career in AI. Its simplicity and versatility have strengthened its status as the go-to language for AI and machine learning development. While C++ is still taught in universities, getting into the industry and building AI products requires knowledge of Python.

                    From startups to tech giants, companies across India are actively seeking Python-proficient developers to drive their AI initiatives. Learning the core language, however, is just not enough.

Apart from being proficient in handling APIs, engineers also need to be well-versed in libraries such as TensorFlow, Keras, and PyTorch. These, along with pandas, NumPy, and Matplotlib for data science, and Django and Flask for web development, are equally important (a short example of this stack in action follows this list).

                    • JavaScript ecosystem: Tools and libraries such as Node.js, React, Angular, and the MERN stack (MongoDB, Express.js, React, Node.js) continue to dominate web development.

                    JavaScript’s role extends beyond web development; it has become increasingly important in AI, particularly for deploying machine learning models in web applications. Frameworks like TensorFlow.js allow developers to run AI models directly in the browser, enhancing user experiences without server-side computations. 

                    Why MERN Stack?

                    The MERN stack is a popular framework for building dynamic web applications. Its relevance extends to AI when developing platforms that require real-time data interaction and user engagement. 

                    Companies favour candidates with MERN stack experience to create scalable and AI-integrated web solutions that enhance user experiences. MERN is getting increasingly competitive, so staying ahead of the curve requires extensive practice and training. 

• SQL and MongoDB: SQL remains critical for structured data management, while MongoDB caters to NoSQL database needs; both are essential for modern, flexible data applications.
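
As referenced in the Python entry above, here is a short sketch of the kind of everyday workflow these libraries enable: pandas for tabular data, NumPy for the numerics, and Matplotlib for a quick visualisation. The tiny dataset is invented purely for illustration.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Toy end-to-end example: load tabular data, fit a simple trend, plot it.
# The numbers are made up for illustration only.

df = pd.DataFrame({
    "experience_years": [1, 2, 3, 4, 5, 6],
    "salary_lpa": [4.5, 6.0, 7.8, 9.1, 11.0, 12.6],
})

# Least-squares straight line with NumPy.
slope, intercept = np.polyfit(df["experience_years"], df["salary_lpa"], deg=1)
print(f"Estimated increment per year of experience: {slope:.2f} LPA")

# Quick visualisation with Matplotlib.
plt.scatter(df["experience_years"], df["salary_lpa"], label="data")
plt.plot(df["experience_years"], slope * df["experience_years"] + intercept, label="trend")
plt.xlabel("Experience (years)")
plt.ylabel("Salary (LPA)")
plt.legend()
plt.show()
```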

                    Most Sought-After Skills

                    1. Data Structures and Algorithms (DSA):

                    • Why: Fundamental for clearing coding interviews across software development roles.
• Languages: Java and Python are popular choices for practising DSA, with Java often preferred for deeper understanding in Indian hiring scenarios. A short Python example appears after this list.

                    2. Backend Development:

                    • Tech Stack: Java full stack (Spring Boot) or Python (Django or Flask)
                    • Why: Java continues to dominate backend applications, and Python is growing in demand.
                    • Entry Point: It is ideal for starting with MNCs, as Java remains the backbone of many enterprise applications.

                    3. Frontend Development:

                    • Tech Stack: React.js (part of MERN) or Angular
                    • Why: Frontend roles are abundant but competitive, and React is a market favourite.

                    4. Full Stack Development:

                    • Tech Stack: MERN
                    • Why: Many companies look for developers capable of handling both frontend and backend. However, the competition is high.

                    5. Data Analysis and Transition to Machine Learning:

                    • Skills: Python, SQL, Excel, Tableau and Power BI are relevant skills for entry-level data analysis roles.
                    • Next Steps: Transition into data engineering (PySpark, ETL) or machine learning (TensorFlow, PyTorch).

                    6. Cloud Computing:

                    • Platforms: Amazon Web Services (AWS), Azure, Google Cloud
                    • Skills: Docker, Kubernetes, and basic DevOps tools must be learnt to enhance employability.

                    7. Industry-Relevant Projects:

                    • Key Technologies: React, AWS, Docker, Spring Boot
                    • Why: Companies prioritise candidates with practical experience in modern tools over academic projects.

                    8. Text Editors

Mastering editors like VS Code, Sublime Text, or Atom boosts productivity with features like syntax highlighting and code completion.

                    9. Integrated Development Environments (IDE)

                    Tools like PyCharm, Visual Studio, and IntelliJ IDEA streamline development with error highlighting and automation.

                    10. Object-Oriented Design (OOD)

                    • Apply principles like inheritance, encapsulation, and polymorphism.
                    • Design modular, scalable, and maintainable software architectures.

                    11. Cross-Platform Development

                    • Build apps for multiple platforms using Flutter, React Native, or Xamarin.
                    • Ensure seamless user experience across devices and operating systems.

                    12. Prioritisation Based on Scenarios:

                    • If interested in data, focus on Python, SQL, and Excel for Data Analysis.
                    • If aiming for development, start with Java and DSA and move to backend or full-stack development.
                    • If undecided, begin with DSA and a versatile language like Python, from which you can later move into ML or web development.

                    13. Data Tools

                    Excel remains a fundamental tool for basic analysis, while Tableau, Power BI, Qlik Sense and QlikView offer advanced visualisation and business intelligence capabilities.

                    Must-Have Skills for Data Engineers:

                    • Cloud Platforms: Expertise in AWS, Azure, and Google Cloud Platform (GCP) is vital for managing and deploying cloud-based data infrastructure.
                    • Database Management: Knowledge of both relational (e.g., MySQL, PostgreSQL) and non-relational (e.g., MongoDB, Cassandra) databases is crucial.
                    • Data Pipelines and Orchestration: Familiarity with tools like Airflow (workflow orchestration), Kafka (real-time data processing), and ETL pipelines is critical for creating efficient data workflows (a minimal DAG sketch follows after this list).
                    • Snowflake: Increasingly recognised as a powerful data platform for storage and analytics, Snowflake is a must-learn for data professionals.
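
                    To make the orchestration bullet above concrete, here is a minimal, hedged Airflow DAG sketch with one extract task feeding one transform task. It assumes `pip install apache-airflow` (2.x) and that the file sits in the DAGs folder; the DAG id, schedule and task logic are illustrative, and exact parameter names can vary between Airflow versions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: a real pipeline would pull from an API, object storage or a database.
    return [{"order_id": 1, "amount": 1200}, {"order_id": 2, "amount": 800}]

def transform(ti):
    # Reads the extract task's return value from XCom and aggregates it.
    rows = ti.xcom_pull(task_ids="extract")
    print(f"Total order value: {sum(row['amount'] for row in rows)}")

with DAG(
    dag_id="daily_orders_etl",         # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task     # extract runs before transform
```
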
                    ]]>
                    Top 10 Infographics of 2024 by AIM Media House | Editor’s Choice https://analyticsindiamag.com/ai-trends/editors-choice-best-infographics-of-2024-by-aim-media-house/ Sat, 28 Dec 2024 10:39:57 +0000 https://analyticsindiamag.com/?p=10147952 Whether it’s about emerging trends, cutting-edge technologies, or deep industry analyses, we aim to provide a visual delight that informs, inspires, and engages. ]]>

                    At AIM Media House, we believe in transforming complex data into visually captivating stories. A single image can convey what a thousand words often cannot, and hence, each one of our infographics is crafted to deliver unparalleled insights, empowering readers to make informed decisions and stay ahead of the curve. 

                    Whether it’s about emerging trends, cutting-edge technologies, or deep industry analyses, we aim to provide a visual delight that informs, inspires, and engages.

                    In the past year alone, AIM has created hundreds of infographics, each meticulously designed to simplify data and amplify understanding, reflecting our commitment to providing the best visual content in the industry.

                    Here, we present some of the most shared and celebrated infographics of 2024—pieces that sparked conversations, guided decisions, and brought clarity to complexity. 

                    Yeh Dil MMAANG (AI) More 

                    This infographic, which first appeared in The Belamy, our most-read weekly newsletter, on November 4, 2024, highlighted the meteoric rise of AI-driven growth across big tech and showcased the pivotal role AI plays in shaping their strategies and revenues.

                    The title, ‘Yeh Dil MMAANG (AI) More’, is inspired by Pepsi’s iconic slogan “Yeh Dil Maange More,” beautifully capturing the relentless pursuit of AI innovation. From Microsoft’s transformative Copilot empowering 70% of the Fortune 500 to Amazon’s explosive generative AI adoption and Apple’s innovative integration of ChatGPT, each company is redefining its future. 

                    Key Infrastructure Players in AI 

                    This infographic (The Belamy, September 16, 2024) provided a comprehensive overview of the key players shaping AI infrastructure today. From AMD’s dominance in GPUs and PCs, NVIDIA’s advancements in AI hardware, and Intel’s innovations to foundational chip manufacturers like TSMC and Arm, the ecosystem showcases a dynamic interplay of design and performance. Networking giants like Nokia and Cisco ensure seamless integration, while trailblazers like SambaNova and Cerebras push the boundaries in training and inference capabilities. 

                    10 Years of OpenAI

                    This infographic captured the transformative decade of OpenAI. From the debut of GPT-1 in 2018 to the revolutionary advancements of GPT-4, DALL·E, and ChatGPT Enterprise, OpenAI has consistently redefined AI applications across industries. Notable innovations like Canvas and ChatGPT Search have expanded ChatGPT’s utility, while the introduction of cost-efficient models like GPT-4o Mini showcases the company’s commitment to accessibility.

                    Daily Power Consumption by ChatGPT

                    This infographic was featured in the AIM article ‘95% Less Energy Consumption in Neural Networks Can be Achieved. Here’s How’. It highlighted the massive energy requirements of ChatGPT, which consumes 500,000 kilowatt-hours daily, equivalent to powering 17,000 US households or 62,500 Indian households. This underscores the critical need for innovation in energy-efficient AI to balance performance and sustainability.

                    Energy Consumption Comparison: ChatGPT vs Google Search

                    Published in the same article, this comparison demonstrates the stark disparity in energy consumption between ChatGPT and Google Search. A single ChatGPT query uses 2.9 watt-hours, almost 10 times more than Google Search’s 0.3 watt-hours. The infographic calls attention to the urgency of developing energy-efficient techniques for AI systems to reduce their environmental impact while maintaining scalability.
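
                    A quick back-of-the-envelope check of the figures cited above, using only the article’s own numbers:

```python
# Daily consumption figure and household equivalents as reported.
chatgpt_daily_kwh = 500_000
print(chatgpt_daily_kwh / 17_000)     # ~29.4 kWh/day implied per US household
print(chatgpt_daily_kwh / 62_500)     # 8.0 kWh/day implied per Indian household

# Per-query comparison.
chatgpt_query_wh, google_query_wh = 2.9, 0.3
print(chatgpt_query_wh / google_query_wh)   # ~9.7, i.e. "almost 10 times"
```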

                    OpenAI Mafia 

                    This infographic, featured in the article ‘The OpenAI Mafia Just Got Bigger’, throws light on the dynamic network of former OpenAI employees who have gone on to launch over 30 AI startups after leaving the organisation. With notable names like Andrej Karpathy, co-founder of Eureka Labs, and Ilya Sutskever, who now leads Safe Superintelligence, this thriving alumni network is shaping the AI landscape. OpenAI’s Mira Murati also quit recently to pursue her passion. 

                    Google’s AI Era Unleashed 

                    2024 marked the dawn of the Gemini ‘Thought’ Era, setting a new standard in AI reasoning and transparency. This infographic captured Google’s relentless advancements in AI, culminating in the launch of Gemini 2.0 Flash Thinking—a model that not only solves complex problems with lightning speed but also reveals its thought process for unmatched transparency.

                    Accenture’s $4.2 Bn GenAI Boom 



                    This infographic, featured in The Belamy (December 23, 2024), illustrated Accenture’s remarkable strides in generative AI, positioning itself as a trailblazer reshaping the IT landscape. With $1.2 billion in bookings and $500 million in revenue this quarter alone, Accenture’s cumulative $4.2 billion in GenAI bookings and $1.4 billion in sales since September 2023 underscore its critical role in driving large-scale transformations.

                    Top 25 GCC Heads India 2024

                    This infographic, featured in AIM’s ‘Top 25 GCC Heads India 2024’ on LinkedIn, celebrated the leaders transforming global capability centres (GCCs) into innovation powerhouses in India. These visionary executives are driving technological advancements, fostering efficiency, and shaping strategies that align with global goals while navigating India’s unique market dynamics.

                    Bengaluru: The GCC Capital 

                    This infographic highlighted Bengaluru’s dominant role as India’s global capability centre (GCC) hub. With 875 GCCs, accounting for 30% of India’s total, Bengaluru employs 6 lakh professionals and contributes $22.2 billion annually to the economy. The Karnataka government’s GCC policy 2024 aims to further this growth by attracting 500 new GCCs and creating 3.5 lakh jobs by 2029, targeting a total economic output of $50 billion.

                    ]]>
                    AIM’s Top Social Media Posts That Went Viral in 2024 https://analyticsindiamag.com/ai-trends/top-social-media-posts-by-aim-in-2024-that-went-viral/ Fri, 27 Dec 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10147824 From Salesforce India’s historic revenue surge to Meesho’s open-source ML platform, viral posts showcased the impact of technology across industries. ]]>

                    In 2024, AIM Media House dominated the social media world. From Salesforce India’s historic revenue surge to Meesho’s open-source ML platform, viral posts showcased the impact of technology across industries. Whether it was celebrating leadership changes at Accenture or introducing the AIM 100 list of influential AI leaders, these moments sparked global conversations. 

                    On X, innovations like Cursor AI and the rise of young tech prodigies like Dhravya Shah captured the imagination of thousands. 

                    Here’s a roundup of the top social media moments by AIM that went viral.

                    LinkedIn:

                    Salesforce India Crosses the $1 Billion Milestone

                    Salesforce India achieved a remarkable 36% revenue surge, reaching the $1 billion mark for the first time. CEO Arundhati Bhattacharya celebrated the company’s exponential growth, expanding from 2,500 employees in 2020 to over 13,000 today. 

                    To further strengthen its presence, Salesforce announced plans to build its first Salesforce Tower in Bengaluru, which is slated to open in 2026. This will position the city as a future hub of innovation. Meanwhile, cricket legend Rahul Dravid joined Salesforce as the brand ambassador.

                    The Power of GCCs

                    This post highlighted how global capability centres (GCCs) are reshaping India’s corporate landscape. These centres are not just operational hubs; they are also driving innovation and employment across industries. The leaders behind these centres show how they are promoting efficiency, accelerating technological growth, and contributing to India’s economy.

                    Karthik Narain Steps Into CTO Role at Accenture

                    Accenture’s appointment of Karthik Narain as the new chief technology officer was a major moment for the company. Narain succeeded the legendary Paul Daugherty, who retired after nearly four decades at the company. Narain, a seasoned tech leader, expressed his gratitude to Daugherty while acknowledging his contributions to Accenture’s tech journey. 

                    Meesho Goes Open-Source with its ML Platform

                    Meesho made waves at the NVIDIA Summit in Mumbai after it announced the open-sourcing of its ML platform. The announcement not only highlighted Meesho’s commitment to empowering developers globally but also opened up the tools that power one of India’s largest e-commerce platforms. With this move, Meesho aims to democratise AI and ML and enable developers and data scientists to access resources to accelerate innovation.

                    Top Computer Science Influencers You Need to Follow

                    AIM curated a list of India’s top computer science influencers who inspire the next generation of coders and tech enthusiasts. The list includes experts who offer a wealth of knowledge on AI, algorithms, and programming tips, making it a must-follow for anyone passionate about coding. From demystifying AI to offering no-nonsense career advice, these digital mentors are shaping the future of technology while keeping things entertaining.

                    X:

                    Cursor AI: A Game-Changer for Developers

                    Cursor AI from Anysphere took the internet by storm and empowered users of all ages to create impressive projects with ease. Whether it’s an eight-year-old child building a chatbot or someone creating a financial dashboard using voice commands, OpenAI-backed Cursor AI is revolutionising the accessibility of coding. 

                    With its seamless integration, customisation options, and GPT-4 assistance, the platform is making AI-driven development accessible to everyone, regardless of technical background.

                    Top AI Leaders Share Their Vision for the Future

                    A viral post on X showcased the insights of some of the most prominent AI experts, including Yann LeCun, François Chollet and Andrej Karpathy. This post provided a glimpse into the future of AI from the perspectives of these industry leaders, where everything from innovation to ethical challenges was discussed. The post was even shared by the experts themselves. 

                    The AIM 100

                    AIM unveiled the AIM 100, a list of the world’s most influential leaders in AI, in a post that quickly gained traction. Leaders like Elon Musk, Yann LeCun, Andrew Ng, Julie Sweet, CP Gurnani and Bhavish Aggarwal, among others, are pioneering AI development and bringing AI to every field possible. 

                    Swaayatt Robots Secures $4 Million Funding

                    A Bhopal-based AI and robotics startup, Swaayatt Robots, raised $4 million at a valuation of $151 million from US-based investors. This news captured attention not only because of the success of the funding but also because the startup is set to raise an additional $11 million soon, which could propel its valuation to nearly $200 million. A viral post on X showcased the rising influence of Indian startups in the global tech ecosystem.

                    Dhravya Shah: A 16-Year-Old Programming Prodigy

                    Dhravya Shah’s journey from programming at 16 to building an open-source alternative to Redis caught the eye of X users. Now 18, Shah’s ambitious project Radish, a database system inspired by Redis, is gaining attention for its unique approach and open-source model. His story of self-driven innovation became a viral inspiration for young developers, highlighting how age is no barrier to entrepreneurial success in the tech world.

                    Instagram:

                    Yotta Welcomes 4,000 NVIDIA H100 GPUs

                    India’s AI community celebrated a major milestone, with NVIDIA providing the crucial hardware. The spotlight was on Yotta, a leading data centre company, which made waves by receiving its first shipment of 4,000 NVIDIA H100 GPUs, marking a new era of innovation in the Indian tech landscape.

                    Gukesh Dommaraju Makes History

                    AI cameras captured the emotional moment when Gukesh Dommaraju became the youngest World Chess Champion. With two cameras strategically placed to focus on the players, AI technology enhanced the broadcast by tracking focus flow and delivering real-time data – a game-changing moment in sports broadcasting.

                    Lennart Ootes, part of the broadcast team for the final match between Dommaraju and Ding Liren, shared insights on how AI technology elevated the broadcast experience. For the first time, AI cameras provided live data about focus flow, a testament to how rapidly AI is shaping the future of sports media.

                    Fireside Chat at NVIDIA AI Summit 2024 between Jensen Huang & Mukesh Ambani

                    The fireside chat between NVIDIA CEO Jensen Huang and Reliance Industries chairman Mukesh Ambani at the NVIDIA AI Summit 2024 highlighted key AI trends. The conversation covered pivotal advancements, touching on topics shaping the future of AI in India and beyond, including how India will become the biggest intelligence market in the world.

                    Vox Pop in Streets of Bengaluru after Appraisal Season

                    Our creative team went to the streets of Bengaluru to ask techies about their yearly appraisals. While most of them were very happy with their salary hikes, some were hesitant to answer, as they had left their companies shortly after.

                    Nithyananda’s AI Chatbot

                    Nithyananda, the Indian self-styled godman, launched his own AI chatbot. The Ask Nithyananda chatbot was aimed at solving all the users’ problems and was trained on 27 years of his teachings. AIM asked the chatbot the location of Kailasa, the country he founded, and it revealed its exact location.

                    ]]>
                    6 Must-Attend Conferences for Developers by AIM in 2025 https://analyticsindiamag.com/ai-trends/6-must-attend-conferences-by-aim-in-2025/ Wed, 25 Dec 2024 10:30:00 +0000 https://analyticsindiamag.com/?p=10147682 Whether you're a data engineer, AI startup founder, developer, or corporate leader, these events offer unparalleled opportunities to learn, network, and stay ahead in the fast-paced tech landscape. ]]>

                    India is a bustling hub for AI, data science, and technology enthusiasts. And with its base in Bengaluru, AIM Media House is at the forefront of this tech transformation. With a mission to empower and connect professionals and organisations through knowledge and innovation, AIM organises some of the industry’s most influential conferences.

                    From fostering diversity and inclusion in tech to exploring the latest advancements in generative AI, AIM’s conferences cater to a wide range of interests and expertise. Whether you’re a data engineer, AI startup founder, developer, or corporate leader, these events offer unparalleled opportunities to learn, network, and stay ahead in the fast-paced tech landscape. 

                    Here’s a quick look at AIM’s upcoming conferences next year, why you should attend them, and how they’re shaping the future of AI and technology in India.

                    MLDS 2025

                    MLDS is a haven for developers and data scientists looking to stay ahead in the ever-evolving world of artificial intelligence. Focused on the latest innovations in generative AI and software development, this three-day event offers something for everyone, from keynotes to hands-on workshops. 

                    With paper presentations and tech talks across three tracks, MLDS ensures you leave with actionable insights, whether you’re a beginner or an experienced AI practitioner.

                    The event’s emphasis on practical applications and industry case studies makes it a must-attend for professionals looking to integrate AI into their workflows. Moreover, the networking opportunities with AI leaders and like-minded peers are unparalleled. 

                    Register here

                    • Dates: February 5-7, 2025
                    • Venue: NIMHANS Convention Center, Bengaluru, India

                    The Rising 2025

                    As one of India’s most impactful conferences on diversity and inclusion in tech, The Rising addresses some of the most pressing issues in today’s workplaces. From practical strategies for fostering equity to inspiring success stories, this summit provides a deep dive into creating a culture of belonging. 

                    Leaders from top companies share their approaches to tackling DEI challenges, making it a rich learning ground for both individuals and organisations.

                    Whether you’re an HR professional, tech leader, or someone interested in creating equitable spaces, The Rising is the perfect platform to gain insights and actionable takeaways. 

                    Register here

                    • Dates: March 20-21, 2025
                    • Venue: J N Tata Auditorium, Bengaluru, India

                    Happy Llama 2025

                    The first edition of Happy Llama is not just a conference; it’s a celebration of India’s vibrant AI startup ecosystem. If you’re an entrepreneur or an investor, this is your chance to connect with the minds driving innovation in AI. The summit includes engaging pitch battles, insightful talks, and workshops tailored to startup needs, making it an unmissable opportunity for those seeking to learn and grow.

                    Whether you’re looking for funding, partnerships, or just some inspiration, Happy Llama provides the perfect platform for it all and more. With its dynamic format and energetic vibe, this one-day event at Bengaluru’s Radisson Blu is your gateway to networking with the best and brightest in AI startups.

                    Register here

                    • Date: April 25, 2025
                    • Venue: Hotel Radisson Blu, Bengaluru, India

                    Data Engineering Summit (DES) 2025

                    For professionals in data engineering, DES is the ultimate event to explore the latest tools, techniques, and trends shaping the field. As India’s first and only conference dedicated to data engineering, it delves into software deployment architectures, data frameworks, and scalable solutions for real-world problems.

                    This summit brings together top engineers and thought leaders to share their expertise, offering attendees a unique opportunity to enhance their skills and knowledge. Whether you work in analytics, machine learning, or cloud computing, DES is a must-attend event.

                    Register here

                    • Dates: May 15-16, 2025
                    • Venue: Taj Yeshwantpur, Bengaluru, India

                    MachineCon GCC Summit 2025

                    MachineCon GCC Summit is the perfect blend of vision and action, designed specifically for leaders in India’s global capability centres. Focused on the transformative potential of generative AI, the summit explores how GCCs can harness AI to drive operational excellence and innovation.

                    With a curated lineup of sessions featuring pioneers and experts, this two-day event offers strategic insights into the future of GCCs. 

                    Register here

                    • Dates: June 19-20, 2025
                    • Venue: Taj Yeshwantpur, Bengaluru, India

                    Cypher 2025

                    Saving the best for last, we have Cypher, AIM’s flagship conference. Since its first edition in 2015, Cypher has grown exponentially to become not just India’s biggest AI summit but also its most impactful.

                    With over 5,000 attendees daily, the event brings together a diverse community of AI enthusiasts, professionals, and thought leaders. The agenda spans keynotes, panel discussions, and exhibitions, offering a comprehensive view of AI’s impact across industries.

                    Whether you’re a beginner curious about AI’s potential or an industry leader looking for the latest advancements, Cypher has something for everyone. 

                    Register here

                    • Dates: September 17-19, 2025
                    • Venue: KTPO @ Whitefield, Bengaluru, India
                    ]]>
                    AI Personalities Who Sparked Controversy in 2024 https://analyticsindiamag.com/ai-trends/ai-personalities-who-sparked-controversy-in-2024/ Tue, 24 Dec 2024 12:23:35 +0000 https://analyticsindiamag.com/?p=10147732 Scarlett Johansson publicly expressed her frustration after discovering that OpenAI had created a voice for its chatbot that she felt resembled hers too closely.]]>

                    While 2024 has been a year of progress, it has also been a hotbed of bold claims and heated controversies within the world of AI. This article looks back at the most controversial figures in AI in 2024 – individuals who stirred debates, challenged norms and redefined what AI can and should do.

                    Jürgen Schmidhuber

                    German computer scientist Jürgen Schmidhuber, known for his work on recurrent neural networks, has often argued that he and other researchers have not received adequate recognition for their contributions to deep learning. Instead, he claimed, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio have received disproportionate credit.

                    Most recently, he alleged that Hinton’s Nobel Prize is based on uncredited work, claiming that Hinton and Hopfield’s contributions were heavily influenced by existing research without adequate acknowledgement.

                    “This is a Nobel Prize for plagiarism,” Schmidhuber wrote on LinkedIn. He argued that methodologies developed by Alexey Ivakhnenko and Shun’ichi Amari in the 1960s and 1970s, respectively, formed the foundation of the laureates’ work. 

                    “They republished methodologies developed in Ukraine and Japan without citing the original papers. Even in later surveys, they didn’t credit the original inventors,” Schmidhuber said, suggesting that the omission may have been intentional.

                    Rosalind Picard

                    Rosalind Picard, a professor at the MIT Media Lab, recently faced controversy over allegedly discriminatory remarks about Chinese students made during a keynote speech at NeurIPS 2024.

                    During her presentation, Picard mentioned an incident involving a Chinese student who had been expelled. The act drew criticism for appearing to single out nationality and reinforce harmful stereotypes. This prompted apologies from both Picard and the NeurIPS organisers and sparked discussions about inclusivity and respect within the AI research community.

                    Bhavish Aggarwal

                    Bhavish Aggarwal, founder of Ola and its AI venture Krutrim, made several notable statements about AI this year that sparked discussions. 

                    Earlier this year, Aggarwal framed India’s AI development in terms of data sovereignty and criticised what he termed “techno-colonialism” – the exploitation of developing countries by global tech giants through technology.

                    “India generates the largest amount of digital data in the world, but all of it is sitting in the West…They take our data out, process it into AI and then bring it back and sell it in dollars to us. It’s the same East India Company all over again,” he said.

                    His remarks triggered controversy, as critics noted that much of Ola’s early funding had come from global investment firms.

                    Meanwhile, following an incident where LinkedIn’s AI tool referred to him using gender-neutral pronouns, Aggarwal announced that Ola would shift from Microsoft’s Azure cloud platform to its own Krutrim cloud. Moreover, he called on other Indian companies to follow suit, which some interpreted as promoting anti-Western sentiment in the tech industry.

                    Eric Schmidt

                    Former Google CEO Eric Schmidt recently made several notable statements about AI and its potential risks. In interviews with ABC News and PBS, Schmidt warned that AI systems could reach a “dangerous point” when they can self-improve, suggesting that we need to consider “unplugging” them at that stage.

                    He expressed concern about computers running autonomously and making their own decisions, calling for human oversight to maintain “meaningful control” over autonomous weapons.

                    Mira Murati 

                    Mira Murati, former chief technology officer of OpenAI, found herself at the centre of controversy in March regarding the training data for Sora, OpenAI’s new text-to-video AI model. 

                    During an interview with The Wall Street Journal, Murati was asked about the specific sources of data used to train Sora. She revealed that the model was trained on “publicly available and licensed data”. However, when asked whether content from platforms like YouTube, Instagram, or Facebook was used to train the model, she responded with uncertainty, saying, “I’m actually not sure about that. I’m not confident about it.”

                    Hoan Ton-That

                    This year, Hoan Ton-That, CEO and co-founder of Clearview AI, remained a controversial figure in the AI industry due to his company’s facial recognition technology and its practices. 

                    Clearview AI faced significant legal issues, including a €30.5 million fine from the Dutch Data Protection Authority for maintaining an “illegal database” of billions of facial images. The company was also warned of additional penalties of up to €5.1 million for failing to comply with EU data protection laws.

                    Despite these challenges, Ton-That defended the company as he asserted that it only uses publicly available online data and compared its approach to Google’s photo search. He argued that Clearview’s technology plays a crucial role in law enforcement, citing its use in investigations into the January 6 Capitol riots. 

                    Scarlett Johansson

                    Earlier this year, Scarlett Johansson became embroiled in a major controversy regarding the alleged unauthorised use of her voice by OpenAI in ChatGPT. The issue surfaced when OpenAI unveiled a new voice feature called ‘Sky’, which many users noticed sounded strikingly similar to Johansson’s voice from her role in the movie ‘Her’.

                    The situation escalated when Johansson publicly expressed her frustration after discovering that OpenAI had created a voice for the chatbot that she felt resembled hers too closely. This came despite her having declined an offer from OpenAI in September 2022 to lend her voice to the project. Upon hearing a demo of the new voice, Johansson was reportedly shocked and upset, leading her to demand that OpenAI halt the use of the voice.

                    Prabhakar Raghavan 

                    Earlier this year, Prabhakar Raghavan, Google’s chief technologist, faced criticism over the company’s Gemini AI image generation feature. The controversy stemmed from Gemini producing historically inaccurate and overly diverse images in response to prompts about specific historical figures and events. 

                    For example, when prompted to create depictions of the Founding Fathers of the United States, the AI-generated images included individuals from various ethnic backgrounds, which did not align with historical records.

                    Raghavan admitted that the feature had fallen short and issued an apology for the inaccuracies. He explained that the AI model had been adjusted to promote diversity in its outputs, which occasionally resulted in overcorrection.

                    Elon Musk

                    Elon Musk has a love-hate relationship with OpenAI. The tech billionaire recently filed a preliminary injunction to stop OpenAI from switching to a for-profit model. Musk, who co-founded OpenAI, accused the company of antitrust violations and betraying its founding principles. His lawsuit, which now includes Microsoft as a defendant, argues that OpenAI has moved away from its original nonprofit mission to use AI research to benefit humanity.

                    In response, OpenAI released emails and documents from 2017 showing that Musk had supported a for-profit structure and even sought majority control of the company. OpenAI CEO Sam Altman had publicly called Musk “a clear bully”.

                    ]]>
                    Top 10 Videos of 2024 by AIM – Editors’ Pick https://analyticsindiamag.com/ai-trends/top-10-videos-of-2024-by-aim-editors-pick/ Mon, 23 Dec 2024 10:44:21 +0000 https://analyticsindiamag.com/?p=10144167 Explore AIM's top 10 videos of 2024, showcasing cutting-edge AI innovations, industry leaders, and creative experiments shaping the future.]]>

                    AIM has a robust video content module known for producing over 100 curated videos annually, alongside content from our conferences and events. These videos offer insights into the latest advancements in AI, spotlight industry leaders, and showcase innovations shaping the future of technology. From enterprise strategies to quirky AI-driven experiments, here are the top 10 AIM videos of the year.

                    1. Redefining Mobility with AI | Ford Business Solutions

                    At Ford’s Business Solutions team in Chennai, AI is driving the future of mobility. This video dives into how nearly 1,000 professionals at Ford’s Global Data Insights & Analytics (GDIA) team are using AI and big data to transform the automotive industry. From connected cars to AI-powered logistics, Ford’s innovations extend far beyond vehicles, creating global impact.

                    Key Takeaway: Ford is at the forefront of AI-driven mobility, fostering a culture of innovation and inclusion.

                    2. Every Industry Will Transform Using AI | Ishit Vachhrajani, AWS

                    AWS Enterprise Strategist Ishit Vachhrajani takes centre stage to discuss how generative AI is disrupting industries, from healthcare to manufacturing. This episode of Simulated Reality explores the role of AI-driven solutions in reshaping businesses globally.

                    Key Takeaway: AI will redefine industries at every level, creating transformative possibilities across the board.

                    3. Wipro: Best Firm for Data Scientists

                    Wipro’s recognition as the “Best Firm for Data Scientists” highlights their commitment to fostering talent and innovation. This video delves into how Wipro’s data scientists are contributing to real-world solutions, and why the company stands out as a leader in analytics and AI.

                    Key Takeaway: Wipro provides a stellar environment for data scientists, cementing its place as a top employer in AI and analytics.

                    4. AI: A Solution or a Problem? | Kailash Nadh, Zerodha

                    Zerodha CTO Kailash Nadh explores the intersection of AI, open-source culture, and fintech. This in-depth conversation reveals how Zerodha harnesses AI while staying grounded in practical, human-centric innovation.

                    Key Takeaway: AI is powerful but must align with real-world problems, not just hype.

                    5. Building AGI in India | Young Indic AI Developers

                    Three of India’s leading AI developers discuss their groundbreaking work on foundational models, AGI, and India’s AI ecosystem. This dynamic conversation explores the future of AI development in India.

                    Key Takeaway: India’s young AI talent is making significant strides toward AGI and foundational AI models.

                    6. Database Market in India | Raj Verma, Singlestore

                    In this engaging interview, Raj Verma, CEO of Singlestore, unpacks the $120 billion potential of India’s database market. Verma highlights Singlestore’s role in shaping real-time analytics and distributed SQL.

                    Key Takeaway: Real-time data and analytics are revolutionising industries in India and beyond.

                    7. AI Meets the Kitchen | ChatGPT Recipe with Chef Nidhi Nahata

                    In this light-hearted video, chef Nidhi Nahata uses ChatGPT to create a unique recipe, blending AI with culinary arts. Will AI lead to a masterpiece or a kitchen disaster?

                    Key Takeaway: AI’s creative potential can extend to unexpected places, like the kitchen!

                    8. AI vs You: Who can write better Pick-Up lines? | Vox Pop

                    In this fun and interactive video, we explore AI’s impact on creativity. Can AI generate better pick-up lines than humans? This light-hearted experiment dives into AI’s role in shaping music, fashion, and humour.

                    Key Takeaway: AI’s creativity is expanding, challenging human ingenuity in unexpected ways.

                    9. OpenAI in India | Pragya Misra

                    Pragya Misra, Lead Public Policy at OpenAI India, discusses OpenAI’s progress towards AGI, hiring challenges in India, and Sam Altman’s vision for the region.

                    Key Takeaway: OpenAI is deeply invested in nurturing AI talent and innovation in India.

                    10. The ‘Woke’ Google Gemini Reaction

                    In this reaction video, AIM’s editorial team dissects Google’s controversial Gemini tool, highlighting the debate around AI ethics and representation.

                    Key Takeaway: AI ethics and accuracy remain central to AI development and deployment.


                    These top 10 videos showcase how AI is shaping industries, culture, and even our everyday lives. If you’d like to collaborate with us on video production, reach out to info@aimmediahouse.com.

                    Our video team is exceptional, delivering high-quality, engaging content that highlights the latest in AI innovation. Stay tuned for more engaging content in 2025!

                    ]]>
                    Top 12 AI Developments from IITs in 2025 https://analyticsindiamag.com/ai-trends/top-12-ai-developments-from-iits-in-2025/ Fri, 13 Dec 2024 11:26:24 +0000 https://analyticsindiamag.com/?p=10143507 These initiatives spanned a wide spectrum, including advancements in quantum imaging, semiconductor efficiency, multilingual AI capabilities, and Indic models and upskilling.]]>

                    As the conversation around AI for Bharat grew in 2024, the Indian Institutes of Technology (IITs) continued to strengthen their position as drivers of generative AI impact for the country.

                    From collaborations with industry giants like AMD, TCS, and Lightstorm to indigenous innovations such as IndicVoices and MedSumm, IITs showcased a remarkable blend of research, innovation, and real-world application.

                    These initiatives spanned a wide spectrum, including advancements in quantum imaging, semiconductor efficiency, multilingual AI capabilities, and Indic models and upskilling. These milestones reflect not just the technological prowess of IITs but their commitment to solving India’s pressing challenges through AI.

                    IIT Bombay’s MoU with Samsung R&D Institute

                    IIT Bombay collaborated with Samsung R&D to drive AI advancements and innovation in digital health and other key areas. The five-year partnership enables joint research projects, offering IIT Bombay students and faculty the chance to work closely with Samsung engineers. This collaboration fosters industry readiness among students while providing Samsung engineers with training and certification in cutting-edge technologies like AI and digital health.

                    Centre for Human-Centric AI at IIT Madras

                    The IIT Madras Pravartak Technologies Foundation launched the ‘Centre for Human-Centric Artificial Intelligence’ (CHAI), aiming to amplify human potential through AI. The Centre focuses on technology development, entrepreneurship, human resource enhancement, and fostering international collaborations, presenting a critical opportunity for India to lead in human-centric AI innovation.

                    AMD Partnered with IIT Bombay to Boost Semiconductor Startups

                    AMD partnered with the Society for Innovation and Entrepreneurship (SINE) at IIT Bombay to support semiconductor startups in India. As part of this collaboration, AMD provided grants to IIT Bombay-incubated startups working on energy-efficient Spiking Neural Network (SNN) chips. These startups focused on significantly reducing the energy consumption of conventional neural networks.

                    TCS Collaborated with IIT Bombay to Develop Quantum Diamond Microchip Imager

                    Tata Consultancy Services (TCS) formed a strategic partnership with IIT Bombay to create India’s first Quantum Diamond Microchip Imager. This advanced sensing tool aimed to enhance semiconductor chip precision, reduce failures, and improve the energy efficiency of electronic devices.

                    Over two years, TCS experts worked with Dr. Kasturi Saha, Associate Professor in IIT Bombay’s Department of Electrical Engineering, to develop the quantum imaging platform at the PQuest Lab. This platform promised better quality control for semiconductor chips, enhancing reliability and efficiency across industries.

                    Lightstorm and IIT Madras Launched Employment Skilling Initiative

                    Lightstorm, a leading connectivity infrastructure provider, signed an MoU with IIT Madras to launch an “Employment Skilling Initiative.” The program aimed to bridge skill gaps among underprivileged students and support youth, women, and job seekers from tier-2 and tier-3 cities.

                    In collaboration with IIT Madras’ Technology Innovation Hub, Lightstorm offered placement assistance for Arts, Science, and Commerce students, with high achievers securing internships. Dr. Mangala Sunder Krishnan, Professor Emeritus at IIT Madras, commended the initiative for promoting quality education and inclusive economic growth.

                    Sarvam AI and IIT Madras Released IndicVoices Dataset

                    Sarvam AI, AI4Bharat, and IIT Madras unveiled IndicVoices, a speech dataset encompassing 7,348 hours of natural and spontaneous speech from 16,237 speakers across 145 Indian districts and 22 languages. This diverse dataset included read (9%), extempore (74%), and conversational (17%) audio segments, with 1,639 hours already transcribed.

                    Using IndicVoices, they developed IndicASR, the first Automatic Speech Recognition (ASR) model supporting all 22 languages listed in the 8th Schedule of the Indian Constitution. Funded by BHASHINI (MeitY) and supported by grants from Nilekani Philanthropies and EkStep Foundation, the project set a global benchmark for multilingual data collection. All data, tools, and models were made publicly available.

                    IIT Patna Researchers Introduced MedSumm Framework

                    Researchers from IIT Patna presented MedSumm, a multimodal approach integrating Hindi-English codemixed medical queries with visual aids to enhance healthcare understanding. The MMCQS dataset, part of this work, included 3,015 multimodal medical queries with golden summaries in English, combining visual and textual data.

                    Meta’s Launch of Srijan and YuvAI at IIT Jodhpur

                    IIT Jodhpur’s “Srijan,” a pioneering Center for Generative AI developed with Meta and IndiaAI, has bolstered India’s AI ecosystem. Paired with the “YuvAI Initiative for Skilling and Capacity Building” in partnership with AICTE, these initiatives emphasise open-source AI, skill development, and research. Together, they aim to nurture talent capable of addressing national challenges through generative AI.

                    IAF-IIT Delhi Partnership

                    The Indian Air Force (IAF) and IIT Delhi joined forces through an MoU to innovate in aviation textiles. This collaboration focuses on obsolescence management, self-reliance, upgradation, and digitisation within aviation-grade textiles, promoting indigenisation and cutting-edge research.

                    NHA and IIT Kanpur’s Healthcare AI Initiative

                    IIT Kanpur and the National Health Authority (NHA) signed an MoU to advance AI-driven digital public goods for healthcare under the Ayushman Bharat Digital Mission (ABDM). The partnership includes developing a federated learning platform, open benchmarking tools for AI models, and a secure consent management system, with the aim of revolutionising India’s healthcare landscape.

                    Online AI Courses for School Students by IIT Madras

                    IIT Madras introduced two online certificate courses in data science, AI, and electronic systems for students in classes 11 and 12 as part of the IITM School Connect programme. These courses offer hands-on career exposure and prepare young learners for future opportunities in AI and technology.

                    IIT Delhi’s MoU with Honda on Cooperative Intelligence

                    IIT Delhi partnered with Honda Cars India Limited (HCIL) to advance Cooperative Intelligence (CI), an AI framework that enables seamless interaction between machines and humans. This collaboration focuses on enhancing mutual understanding through environmental awareness, scene recognition, and intent analysis to improve future mobility solutions.

                    ]]>
                    AI Stigma Holds Employees Back https://analyticsindiamag.com/ai-trends/ai-stigma-holds-employees-back/ Wed, 11 Dec 2024 07:00:00 +0000 https://analyticsindiamag.com/?p=10143251 For AIM, AI will always remain a tool, not a creator.]]>

                    We’ve all been there – sitting at our desks, nervously awaiting feedback from our editor, wondering if they can tell we used ChatGPT to polish our article. Even when it’s just used to refine our writing, a doubt creeps in: will they think we cheated? This worry doesn’t just haunt writers; graphic designers, coders, and even those perfecting PowerPoint presentations feel the same.

                    But is this fear of being judged for using AI even justified?

                    AI adoption in workplaces is undoubtedly a hot topic, with business leaders everywhere exploring its potential. The reality, however, seems to be different. According to the latest Slack Workforce Index by Salesforce, AI adoption among workers in the US barely increased in the last three months, moving up from 32% to just 33%. Compared to the 8% leap seen a year ago, it now seems the enthusiasm might be fading.

                    The report further explores the reasons behind the stagnation. Nearly half of global desk workers admit they are uncomfortable telling their manager they’ve used AI. It’s not about rejecting the technology but about the stigma attached to using AI: employees worry they will be seen as lazy, incompetent, or dishonest. So, instead of feeling empowered and confident, they’re apprehensive about using AI in the workplace. 

                    Source: Slack Report

                    Yet, there’s another side to the story. As per a report by The Washington Post, employees are embracing AI with open arms and witnessing rewards. These “super users” say AI has boosted their productivity and doubled their efficiency in tasks like strategic planning and project management. Some even use it to analyse data sets or screen job candidates, saving hours during their workday. 

                    However, it’s not all about optimised workflows. Experts warn that heavy reliance on AI comes with risks. Privacy concerns, inaccuracies, and even potential job losses loom large. And perhaps, most importantly, there’s the looming danger of workers losing touch with the very skills AI is meant to augment.

                    Further, the report highlighted that AI enablement is a factor in job searches. But is this trend particularly strong in India’s competitive job market?

                    In an exclusive interview with AIM, Christina Janzer, SVP of Research and Analytics at Slack, said, “This trend is evident in the Indian market. In fact, according to the report, nine out of 10 desk workers in India consider a prospective employer’s ability to provide and support AI tools crucial in their job decisions. This shows the deep integration of AI into workforce expectations, driven by India’s rapidly evolving digital and tech ecosystem.”

                    India is emerging as a global leader in AI adoption. Notably, 61% of online Indian workers are already leveraging AI – a significant leap compared to the global average of 40%. This enthusiasm, coupled with 80% of Indian workers expressing excitement about AI, underscores the country’s progressive approach. 

                    Janzer attributed this to two key factors: India’s strong IT talent pool and a forward-thinking mindset that views AI as a collaborator rather than a replacement. This openness further aligns with India’s digital-first vision, where innovation and experimentation are highly valued. Indian companies are leveraging AI to boost productivity and enhance the employee experience, setting a blueprint for global businesses aiming to unlock AI’s potential.

                    So, this leaves companies to come up with guidelines for the use of AI at work.

                    Companies Who Don’t Like AI

                    With the rise of generative AI tools like ChatGPT, global companies have redefined the use of tech in the workplace. According to a 2023 study by BlackBerry, three out of four organisations worldwide have either considered or implemented bans on GenAI applications at work. The survey, which spanned over 2,000 IT decision-makers from the US, Canada, Europe, Japan, and Australia, highlighted growing concerns over the implications of these technologies. 

                    Interestingly, 61% of those enforcing such bans see them as long-term or permanent restrictions. 

                    For example, in the publishing sector, Medium.com took a stance in May 2024 by barring AI-generated content from its Partner Program. As of May 1, stories generated or edited by AI are ineligible for a paywall, emphasising the platform’s commitment to authenticity.

                    Meanwhile, Wired has declared a complete embargo on publishing AI-generated or AI-edited text. However, they don’t shy away from leveraging AI for brainstorming ideas, generating headlines, or conducting research.

                    Salesforce has gone a step further with a comprehensive Acceptable Use Policy for AI. Their guidelines explicitly prohibit AI use in areas like professional advice and critical decision-making, ensuring ethical boundaries remain intact.

                    Even in journalism, boundaries are being set. The BBC now enforces editorial policies restricting the use of generative AI in news production. AI is permitted only when it serves an illustrative purpose or becomes the subject of the content itself, emphasising human-driven storytelling in factual journalism.

                    At AIM, we stand at the intersection of tradition and innovation. While we embrace AI to enhance and fine-tune our work, we draw the line at allowing it to replace the human essence in content creation or imagery. For us, AI will always remain a tool, not a creator.

                    Indian companies like Zomato, Razorpay, Zepto, and Schbang are guiding employees on how to effectively use AI. Janzer noted that leaders play a pivotal role in accelerating AI adoption by setting clear expectations, creating opportunities for experimentation, and fostering a culture of shared learning.

                    What’s Next?

                    There is uncertainty around the use of AI in the workplace, but workers are keen to upskill. The Slack report mentioned that 76% of employees feel an urgency to become an AI expert, yet 61% have spent less than five hours learning about it, and 30% said they have had no AI training at all. 

                    This highlights the need for employers to bridge the training gap and clarify AI guidelines. Doing so is the need of the hour, as both current employees and new professionals entering the workforce stand a far better chance of thriving in a supportive workplace environment.

                    ]]>
                    2024’s Biggest AI Companies Mergers and Acquisitions https://analyticsindiamag.com/ai-trends/who-bought-what-2024s-biggest-ai-mergers-and-acquisitions/ Wed, 11 Dec 2024 06:11:49 +0000 https://analyticsindiamag.com/?p=10143240 The growth in AI deals for 2024 is expected to increase by 32% compared to 2023.]]>

                    With 271 deals finalised, 2023 was a good year for AI mergers and acquisitions. Building on that momentum, 2024 is projected to close with 326 AI deals, marking a roughly 20% year-over-year increase in deal count. According to Aventis Advisors, AI deal activity in 2024 is expected to grow by 32% compared to 2023.

                    The trend in 2024 AI acquisitions centred on expanding cloud infrastructure, streamlining data management and optimisation, growing generative and sector-specific AI, attracting AI talent, and building cross-platform AI solutions.

                    Here’s a look at the top mergers and acquisitions that made headlines this year.

                    1. AMD x Silo AI 

                    AMD acquired Silo AI, Europe’s largest private AI lab, in a $665 million all-cash deal. Founded in 2017 and based in Helsinki, Finland, Silo AI specialises in creating AI models, platforms, and solutions for leading enterprises across industries. Its clients include Allianz, Philips, Rolls-Royce, and Unilever. The company has also played a key role in developing open-source multilingual LLMs like Poro and Viking, optimised for AMD platforms.

                    2. Databricks x Tabular 

                    Databricks announced its acquisition of Tabular on June 4, 2024, in a deal valued over $1 billion. Tabular, founded in 2021 by Ryan Blue, Daniel Weeks, and Jason Reid, is a data management company specialising in Apache Iceberg, an open-source table format for large analytics datasets. The acquisition brings together the creators of Apache Iceberg and Linux Foundation Delta Lake, the two leading open-source lakehouse formats.

                    In 2024, Databricks acquired several companies to expand its data and AI capabilities. In January, it acquired Einblick, a data science and AI startup. Lilac, an AI startup working on improving data quality for generative AI and LLMs, was acquired in the same month. 

                    Databricks also acquired Prodvana, a cloud-native infrastructure management startup, in July 2024 to strengthen its cloud capabilities.

                    3. NVIDIA x Run:ai 

                    In April 2024, NVIDIA acquired Run:ai, an Israeli firm specialising in Kubernetes-powered AI/ML workflow orchestration, in a deal valued at $700 million. Run:ai’s platform helps enterprise clients optimise and manage their compute infrastructure across on-premises, cloud, and hybrid environments. It supports multiple Kubernetes variants and integrates with third-party AI tools and frameworks. Run:ai serves large enterprises in various industries, using its platform to manage GPU clusters at a data-center scale. 

                    NVIDIA also acquired OctoAI, a Seattle-based generative AI startup, for $250 million, as well as Brev.dev, a San Francisco-based startup specialising in AI and machine learning development platforms.

                    4. Snowflake x Datavolo

                    On November 20, 2024, Snowflake confirmed its acquisition of Datavolo, a move to boost its data management and pipeline automation capabilities. Founded in 2023 by Joseph Witt and Luke Roquet, Datavolo automates multimodal data pipelines for AI, utilising Apache NiFi to optimise data flow between enterprise sources.

                    5. Canva x Leonardo.AI 

                    In July 2024, Canva acquired Leonardo.AI, an Australian generative AI startup founded in 2022. The company’s platform, which enables users to create images and videos, saw rapid growth, amassing 19 million users and generating over a billion images in just 18 months. While the deal’s terms weren’t disclosed, it surpassed Leonardo.AI’s prior $80 million valuation.

                    6. OpenAI x Rockset 

                    OpenAI acquired Rockset on June 21, 2024. Founded in 2016 by former Facebook engineers Venkat Venkataramani and Tudor Bosman, along with database architect Dhruba Borthakur, Rockset focuses on real-time analytics and search database technology. The company had raised over $117.5 million in funding before the acquisition.

                    This acquisition strengthens OpenAI’s retrieval infrastructure by incorporating Rockset’s technology to enhance data usage and provide real-time insights across AI products. The Rockset team has integrated into OpenAI, with current customers transitioning off the platform.

                    7. IBM x HashiCorp 

                    IBM announced its plan to acquire HashiCorp on April 24, 2024, for $35 per share in cash, valuing the deal at $6.4 billion. HashiCorp, a leader in multi-cloud infrastructure automation, provides solutions for infrastructure lifecycle management and security lifecycle management in hybrid and multi-cloud environments.

                    With HashiCorp’s tools like Terraform and Vault, IBM plans to enhance its hybrid cloud platform, improving infrastructure management and security, while positioning itself to capture a larger share of the $1.1 trillion cloud market.

                    8. Salesforce x Tenyx 

                    Salesforce announced its acquisition of Tenyx, an AI voice agent company based in California, on September 3, 2024. Founded in 2022, Tenyx serves industries like e-commerce and healthcare. 

                    9. Microsoft x Inflection

In March 2024, Microsoft struck a $620 million deal with Inflection AI, securing non-exclusive rights to offer Inflection’s AI models via Azure for several years. An additional $33 million was allocated to waive claims tied to hiring Inflection employees, bringing the total value of the agreement—including executive compensation—to over $1 billion.

                    The deal also saw Microsoft hire Inflection’s co-founders, Mustafa Suleyman and Karén Simonyan, along with approximately 70 other employees. Suleyman assumed the role of CEO of Microsoft’s new consumer AI division, overseeing products such as Copilot, Bing, and Edge. 

                    10. Amazon x Adept 

                    In June 2024, Amazon reached a deal with AI startup Adept, bringing in key executives and licensing its technology. As part of the agreement, Amazon hired Adept’s co-founder and CEO, David Luan, along with other leaders and about two-thirds of Adept’s employees. Luan leads Amazon’s new AGI Autonomy division, reporting to Rohit Prasad, who oversees artificial general intelligence at Amazon.

                    11. Yotta x IndiQus Technologies

                    In November 2024, Yotta Data Services acquired IndiQus Technologies, the parent firm of Apiculus, to expand its cloud and AI capabilities.

                    Through this deal, Yotta expands its portfolio in cloud and AI services, laying the groundwork for an AI-focused cloud platform. IndiQus founders Sunando Bhattacharya and KB Shiv Kumar joined Yotta as chief revenue officer and chief innovation officer, respectively. 

                    12. Thomson Reuters x Materia 

                    Thomson Reuters acquired Materia, a US-based startup that develops AI assistants for tax, audit, and accounting professionals, on October 22, 2024. Founded in 2022, Materia’s platform automates research and workflows, simplifying tasks for accountants.

                    The acquisition aligns with Thomson Reuters’ AI strategy and will integrate Materia’s technology into its portfolio to deliver generative AI tools and assistants. Financial details of the deal were not disclosed.

                    13. HPE x Juniper 

On January 9, 2024, HPE announced its acquisition of Juniper Networks in a $14 billion deal, offering $40 per share—roughly 32% above Juniper’s closing stock price on January 8, 2024. The acquisition was set to accelerate HPE’s ambitions in the networking space, with the merger expected to lift the networking segment’s share of HPE’s total revenue from 18% to 31%. Even more striking, networking was projected to contribute over 56% of HPE’s operating income.

                    14. AMD x ZT Systems 

AMD announced its agreement to acquire ZT Systems for $4.9 billion on August 19, 2024. The deal consisted of 75% cash and 25% stock, with a potential additional payment of up to $400 million tied to performance targets.

                    Founded in 1994 and headquartered in Secaucus, New Jersey, ZT Systems specialises in compute design and infrastructure for AI, cloud, and general-purpose computing. With a strong track record of providing essential computing and storage solutions for major cloud providers, the company generates approximately $10 billion in annual revenue.

                    ]]>
                    Sundar Pichai, Elon Musk Dream of Building Quantum Clusters in Space https://analyticsindiamag.com/ai-trends/sundar-pichai-elon-musk-dream-of-building-quantum-clusters-in-space/ Tue, 10 Dec 2024 10:55:56 +0000 https://analyticsindiamag.com/?p=10143173 “We should do a quantum cluster in space with Starship one day,” says Sundar Pichai referring to Elon Musk.]]>

The unveiling of Google’s quantum chip, Willow, not only marked a breakthrough in computation but also sparked a visionary exchange between Google CEO Sundar Pichai and SpaceX’s Elon Musk. Pichai suggested, “We should do a quantum cluster in space with Starship one day,” linking quantum advancements with Musk’s interplanetary ambitions.

                    Musk responded confidently, “That will probably happen. Any self-respecting civilisation should at least reach Kardashev Type II,” hinting at humanity’s potential to harness galactic energy.

                    A Quantum Leap Beyond Classical Computing

                    Google’s Willow chip shattered the limitations of classical computing, completing a computation in under five minutes that would take today’s fastest supercomputer 10 septillion years. 

Using 105 qubits and error correction that improves exponentially as its qubit grid scales, Willow tackles challenges that have hindered quantum computing for decades. “This is a breakthrough,” Pichai posted on X, celebrating the chip’s historic achievement in cracking a 30-year challenge in the field.

                    According to Julian Kelly, director of hardware at Google Quantum AI, “Willow achieves quantum coherence times of 100 microseconds—five times longer than its predecessor Sycamore—while delivering performance breakthroughs in error correction and computation scalability.”

                    By achieving quantum error rates below critical thresholds and outpacing the world’s fastest supercomputers in certain benchmarks, Willow showcases the growing chasm between classical and quantum computing for complex tasks.

                    Ripple Effects and Industry Buzz

                    The announcement reverberated across industries, drawing reactions from tech leaders and the public. Musk’s reference to solar-powered deserts and humanity’s low Kardashev scale ranking contextualised the enormity of Willow’s leap. 

                    Meanwhile, experts like John Preskill called Willow’s performance a defining moment: “The hardware has reached a stage where it can advance science in ways classical systems simply cannot.”

                    Cryptography experts flagged potential disruptions, as quantum systems may soon unravel classical encryption methods. On the practical side, Willow’s capabilities promise revolutionary applications in AI, drug discovery, and energy optimisation — fields constrained by classical systems.

                    Microsoft also bets big on quantum. The company is making significant strides in quantum computing through a partnership with Atom Computing. The collaboration recently achieved a world record by entangling 24 logical qubits, with plans to launch a commercial quantum computer by 2025. 

                    “With 100 reliable qubits, we achieve scientific advantage,” said Microsoft CEO Satya Nadella, emphasising the transformative potential of fault-tolerant quantum systems.

Microsoft’s approach focuses on integrating quantum technology into its Azure platform, combining quantum and classical computing to address challenges in materials science, climate modelling, and drug discovery. Nadella highlighted the stakes, noting, “The foundation we’re building now will determine the leaders of tomorrow.”

                    IBM, on the other hand, is positioning itself as a key player in the quantum race with its IBM Quantum System Two, described as “the building block of creating quantum supercomputers.” 

                    These systems are already being deployed in countries like Japan, South Korea, and Germany. IBM’s director of research emphasised their unique strength: “Quantum computers allow us to simulate nature in ways classical systems cannot,” enabling breakthroughs in materials science, chemistry, and industrial processes.

                    IBM CEO Arvind Krishna sees quantum as the next major frontier for the company, calling it “our big bet for the future.” He highlighted the importance of integrating quantum with AI and cloud technologies, stating, “IBM will become a hybrid cloud, AI, and quantum company as the technology matures.”

Focusing on Hybrid Quantum-Classical Computing

A few days earlier, AWS partnered with NVIDIA to push the boundaries of hybrid quantum-classical computing with the integration of NVIDIA’s CUDA-Q platform into Amazon Braket. This collaboration enables researchers to develop and test quantum-classical workflows using GPU-accelerated simulators within Braket’s managed environment. Stefan Natu of AWS explained that the integration includes a pulse-level programming interface, initially mapped to QuEra’s hardware, marking a significant step in Braket’s evolution as a unified platform for quantum innovation.

                    This partnership addresses the rising demand for classical compute resources essential for quantum tasks such as error correction and circuit simulation. Braket’s GPU-based simulations have demonstrated up to 350x speed improvements over CPU-based alternatives, streamlining the testing and deployment of quantum algorithms. Looking forward, AWS and NVIDIA are targeting ultra-low latency co-processing and AI-enabled quantum simulations, paving the way for quantum-accelerated supercomputing.
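To make that workflow concrete, here is a minimal, hand-written Braket sketch that uses only the SDK’s free local simulator rather than the GPU-accelerated CUDA-Q backends described above. Treat it as an illustration of the build-circuit-then-run pattern those backends plug into, not as AWS or NVIDIA sample code.

# Minimal Amazon Braket sketch: build a two-qubit Bell-state circuit and run it
# on the local simulator. Managed or GPU-accelerated backends follow the same
# build-then-run shape, just with a different device object.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

circuit = Circuit().h(0).cnot(0, 1)   # Hadamard then CNOT entangles qubits 0 and 1
device = LocalSimulator()
result = device.run(circuit, shots=1000).result()
print(result.measurement_counts)      # expect roughly half '00' and half '11'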

                    What’s Next for Google in Quantum Computing? 

                    Building on the success of Willow, Google’s roadmap aims to create a quantum computer with a thousand well-protected logical qubits — equivalent to roughly a million physical qubits. This ambitious plan is part of a six-milestone strategy that Google Quantum AI has been pursuing since its breakthrough in 2019. 

                    “We are now approaching the third milestone, about halfway through our roadmap,” said Hartmut Neven, the founder of Google Quantum AI.

                    The next challenge lies in refining error correction further, an area critical for achieving fault-tolerant quantum systems. 

                    “There is always this competition between errors, classical systems, and the quantum machine,” Neven explained. “If you’re going to win, you have to fight off both the noise and the classical machine.” 

                    Progress so far has been guided by Neven’s Law, a principle describing the double exponential increase in computational power as qubit quality improves and error rates decline.

                    Google is also focusing on moving from benchmarks like random circuit sampling (RCS), which demonstrates computational supremacy, to solving practical, commercially valuable problems. “The next step is to train this enormous compute power towards a task that people on Main Street would care for,” said John Preskill, theoretical physicist and long-time quantum computing advocate.

                    Neven remains optimistic about the timeline: “Early commercial applications could arrive in half a decade or a few years rather than multiple decades.” With these developments, Google’s vision for a quantum-first future is rapidly becoming reality, paving the way for transformative applications in science, technology, and beyond.

                    ]]>
                    Top 25 Tech Influencers In India https://analyticsindiamag.com/ai-trends/top-25-tech-influencers-in-india/ Mon, 09 Dec 2024 13:02:42 +0000 https://analyticsindiamag.com/?p=10142761 Here’s a curated list of the top influencers to follow if you're looking to get started in the world of computer science and AI.]]>

                    In recent years, India has seen an uptick in people showing deep interest in computer engineering and related fields. This has paved the way for influencers who simplify complex concepts, offer practical career advice, and guide viewers from basics to advanced topics. 

                    Here’s a curated list of the top influencers to follow if you’re looking to get started in the world of computer science and AI. This list, compiled by AIM, is based on their popularity and subscriber count.


                    1. Haris Ali Khan (Code with Harry)


                    YT Channel: CodeWithHarry

                    Subscribers: 6.9M

                    Haris Ali Khan, the man behind the channel ‘Code with Harry’, is the biggest individual coding influencer on YouTube, with 6.87M subscribers at the time of writing this article. Harry’s mastery lies in creating comprehensive, beginner-friendly programming tutorials in Hindi.

                    2. Navin Reddy (Telusko)


                    YT Channel: Telusko

                    Subscribers: 2.47M

                    Navin runs a YouTube channel named Telusko, which provides tutorials on Java, Python, and various frameworks. The channel helps tech learners by providing an in-depth, conceptual understanding of programming languages and their practical applications. Its content supports both beginners and intermediate developers in mastering essential programming skills.

                    3. Jenny’s Lectures


                    YT Channel: Jenny’s Lectures

                    Subscribers: 1.79M

                    Jenny’s expertise lies in explaining theoretical computer science concepts with exceptional clarity. She’s supporting tech students by providing in-depth coverage of university-level computer science topics, data structures, and algorithms. Her methodical approach helps learners build a strong foundation in core CS principles, which is crucial for both academic success and technical interviews.

                    4. Akshay Saini


                    YT Channel: Akshay Saini

                    Subscribers: 1.74M

                    Akshay’s mastery is in deep JavaScript concepts and their practical applications. Through his popular ‘Namaste JavaScript’ series, he assists front-end developers in understanding the nuances of JavaScript. His content helps developers write more efficient, bug-free code and prepares them for advanced front-end roles.

                    5. Abdul Bari


                    YT Channel: Abdul Bari

                    Subscribers: 1.07M

                    Abdul Bari is a master at breaking down complex algorithmic concepts. He’s aiding aspiring software engineers by providing crystal-clear explanations of advanced algorithms and their analysis. His content is particularly valuable for students preparing for competitive programming contests and those aiming for positions at top tech companies known for rigorous algorithmic interviews.

                    6. Krish Naik


                    YT Channel: Krish Naik

                    Subscribers: 1.07M

                    Krish’s mastery lies in machine learning and AI education. He’s supporting data science enthusiasts by providing up-to-date information on industry trends and practical implementation of ML/AI concepts. 

                    7. Hitesh Choudhary


                    YT Channel: Hitesh Choudhary

                    Subscribers: 964K

                    Hitesh’s forte lies in modern web development technologies and frameworks. He’s empowering the tech youth by creating project-based courses that simulate real-world development scenarios. His hands-on approach in teaching React, Node.js, and other technologies helps students build a strong portfolio, enhancing their employability in the competitive tech industry.

                    8. Raj Vikramaditya (take U forward)


                    YT Channel: take U forward

                    Subscribers: 721k

                    Raj is a software engineer at Google and also runs a YouTube channel called ‘take U forward,’ where he shares the art of DSA interview preparation. He helps job seekers crack technical interviews at top tech companies by providing structured learning paths, problem-solving strategies, and mock interview scenarios. 

                    9. Kunal Kushwaha


                    YT Channel: Kunal Kushwaha

                    Subscribers: 707k

                    Kunal’s expertise lies in cloud-native technologies and open-source contributions. As a CNCF Ambassador, he’s introducing young developers to the world of DevOps and cloud computing. His content helps tech enthusiasts contribute to open-source projects, build their portfolios, and network within the global tech community.

                    10. Love Babbar


                    YT Channel: Love Babbar

                    Subscribers: 614K

                    Love’s expertise is in structuring DSA learning for placement preparation. His famous ‘450 DSA Questions’ sheet has become a go-to resource for interview preparation. He’s helping tech graduates secure positions in top companies by providing a structured approach to mastering data structures and algorithms, coupled with invaluable insights into the interview process.


                    11. Gaurav Sen


                    YT Channel: Gaurav Sen

                    Subscribers: 601k

                    Gaurav’s expertise lies in system design and large-scale distributed systems. He’s helping experienced developers and tech leads level up their skills by demystifying how tech giants design their systems. His content bridges the gap between coding and architecture, enabling young professionals to transition into senior technical roles.

                    12. Anuj Kumar (Anuj Bhaiya)


                    YT Channel: Anuj Bhaiya

                    Subscribers: 504k

                    Anuj excels at combining technical education with practical career guidance. Through his channel, he’s supporting tech students by providing a holistic view of the industry, covering everything from coding basics to career navigation. His content helps learners make informed decisions about their tech careers and prepare effectively for the job market.

                    13. Ajay Suneja (Technical Suneja)


                    YT Channel: Technical Suneja

                    Subscribers: 499k

                    Ajay Suneja is the face behind the Technical Suneja channel where he guides his viewers through diverse programming languages. He helps tech learners by covering a wide range of topics from web development to data structures. 

                    14. Shradha Khapra


                    YT Channel: Shradha Khapra

                    Subscribers: 445k

Shradha, also known as Microsoft Wali Didi, excels in bridging the gap between academic knowledge and industry requirements. Her content helps young techies by providing a holistic view of software development careers, combining DSA (data structures and algorithms) knowledge with practical coding skills and invaluable placement advice. Her experience at Microsoft lends credibility to her career guidance, inspiring many young women to pursue tech careers.

                    15. Mehul – Codedamn


                    YT Channel: Mehul – Codedamn

                    Subscribers: 437k

Mehul excels at creating interactive coding tutorials, particularly in JavaScript. He’s supporting learners by providing hands-on coding experiences through his platform. Apart from coding, he also explores interesting topics like the tech stacks of specific companies and how they work.

                    16. Sagar Chouksey


                    YT Channel: Sagar Chouksey

                    Subscribers: 305k

                    Founder of Coding Wise, Sagar specialises in Python programming and AI engineering education. His channel focuses on practical, project-based learning with an emphasis on front-end development and programming fundamentals.

                    17. Tanay Pratap


                    YT Channel: Tanay Pratap

                    Subscribers: 284k

                    Tanay’s strength lies in practical web development education and career guidance. Drawing from his experience at Microsoft, he’s helping young developers by focusing on industry-relevant skills and practices.

                    18. Nitish Singh (CampusX)


                    YT Channel: CampusX

                    Subscribers: 263k

CampusX, founded by Nitish Singh, specialises in data science education. The channel helps aspiring data scientists by providing comprehensive courses on machine learning, deep learning, and data analysis. 

                    19. Aditya Verma


                    YT Channel: Aditya Verma

                    Subscribers: 255k

                    Aditya’s expertise lies in teaching dynamic programming and recursive problem-solving techniques. He’s assisting competitive programmers and interview candidates by breaking down complex algorithmic problems into understandable components. 

                    20. Arsh Goyal


                    YT Channel: Arsh Goyal

                    Subscribers: 245k

                    As a senior software engineer at Samsung and former educator at CodeChef-Unacademy, Arsh has established himself as a prominent voice in tech education. His expertise is particularly valuable as he combines his academic excellence (institute gold medalist from NIT Jalandhar) with real industry experience.


                    21. Sumeet Malik (Pepcoding)


                    YT Channel: Pepcoding

                    Subscribers: 205k

                    Sumeet Malik is the face behind the YouTube channel Pepcoding, where he supports tech learners by offering structured courses in DSA and web development. His focus is on creating lifelong learners and helping students develop a strong foundation. 

                    22. Piyush Garg


                    YT Channel: Piyush Garg

                    Subscribers: 194k

                    Piyush specialises in software development best practices and industry insights. He’s supporting young developers by sharing practical coding tips, architectural decisions, and career advice. His content helps bridge the gap between academic knowledge and industry requirements. 

                    23. Vishwajeet (Tech Gun)


                    YT Channel: Tech Gun

                    Subscribers: 191k

                    Vishwajeet, who runs a YouTube channel called Tech Gun, provides detailed Hindi programming tutorials. He is supporting Hindi-speaking tech enthusiasts by offering lengthy, comprehensive videos that cover programming concepts in depth. 

                    24. Arpit Bhayani


                    YT Channel: Arpit Bhayani

                    Subscribers: 134k

                    Arpit’s expertise lies in system design and engineering fundamentals. Through his channel, he’s helping experienced developers understand the intricacies of large-scale system architecture.

                    25. Ayush Singh


                    YT Channel: Ayush Singh

                    Subscribers: 86.1k

                    At just 14 years of age, Ayush made remarkable contributions to the machine-learning community. His ML001 course on FreeCodeCamp has garnered nearly 800,000 views and was even recommended by MIT. He’s helping aspiring data scientists by breaking down complex ML concepts into understandable segments.

                    ]]>
                    2024: The Year English Changed the Coding Game Forever https://analyticsindiamag.com/ai-trends/2024-the-year-english-changed-the-coding-game-forever/ Fri, 06 Dec 2024 10:34:29 +0000 https://analyticsindiamag.com/?p=10142612 This transformation, driven by LLMs like ChatGPT, allows users to interact with complex systems using natural language, making technology accessible to everyone. ]]>

                    Traditionally, coding was the bastion of the select few who had mastered mighty languages like C++, Python, or Java. The idea of programming seemed exclusively reserved for those fluent in syntax and logic. However, the narrative is now being challenged by natural language coding being implemented in AI tools like GitHub Copilot. 

Andrej Karpathy, former senior director of AI at Tesla, predicted this trend last year.

                    But what if you could code by simply telling the computer what you wanted in plain, simple English? This is no longer hypothetical; English is emerging as the universal coding language.

                    Voices Leading the Change

                    NVIDIA CEO Jensen Huang believes that English is becoming a new programming language thanks to AI advancements. Speaking at the World Government Summit, Huang explained, “It is our job to create computing technology such that nobody has to program and that the programming language is human.” 

                    This transformation, driven by large language models like ChatGPT, allows users to interact with complex systems using natural language, making technology accessible to everyone. He calls this a “miracle of AI,” emphasising how it closes the technology divide and empowers people from all fields to become effective technologists without traditional coding skills.

                    This shift represents a profound democratisation of programming. No longer is the power to create software restricted to those who can decipher programming languages. Anyone with a problem to solve and a clear enough articulation of that problem can now write software.

                    “In the future, you will tell the computer what you want, and it will do it,”​ Huang commented. Large language models (LLMs) like OpenAI’s GPT-4 and its successors have made this possible. These models are capable of understanding complex human language, translating it into executable code, and even iterating on that code based on feedback.

                    Microsoft CEO Satya Nadella has been equally vocal about the potential of English for coding. Microsoft’s GitHub Copilot, an AI code assistant, enables developers to describe their needs in natural language and receive functional code in response. Nadella describes this as part of a broader mission to “empower every person and every organisation on the planet to achieve more.”
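As a rough illustration of that interaction, a developer might write nothing but a plain-English comment and let the assistant propose the implementation. The completion below is hand-written for this article rather than actual Copilot output, but it captures the shape of the workflow.

# Developer’s prompt, written in plain English as a comment:
# Return the total price of a shopping cart after applying a percentage discount.
def cart_total(prices, discount_percent):
    # The kind of completion a code assistant might suggest from the comment above.
    subtotal = sum(prices)
    return subtotal * (1 - discount_percent / 100)

print(cart_total([19.99, 5.50, 3.25], 10))  # roughly 25.87 after a 10% discount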

                    The Paradigm Shift

                    Generative AI is transforming software development by enabling natural language prompts to generate code, reducing the need for traditional programming skills. Tools like Cursor AI and GitHub Copilot exemplify this shift, allowing developers or even non-developers to build applications by describing tasks in plain English. 

                    These systems provide real-time code suggestions and streamline debugging processes, making IDEs more accessible and efficient. 

                    However, while these tools can handle routine coding tasks, experts argue that complex, large-scale software still benefits from traditional coding environments for greater control and precision​.

In a discussion last year, Stability AI’s then-CEO Emad Mostaque claimed, “41% of codes on GitHub are AI-generated.” 

                    Similarly, data scientists using platforms like Apache Spark’s English SDK can perform complex data analysis without writing a single line of traditional code. They can simply instruct the system in English, asking for insights, charts, or models, and the system delivers. 
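A minimal sketch of that experience is below, assuming the open-source pyspark-ai package that implements the English SDK (its SparkAI helper, activate() call, and df.ai.transform method) and an OpenAI API key in the environment. The package’s interface may have evolved, so treat these names as assumptions rather than guarantees.

# Hand-written sketch of Apache Spark’s English SDK (pyspark-ai); the names used
# here follow the package’s documentation and are assumptions, not guarantees.
from pyspark.sql import SparkSession
from pyspark_ai import SparkAI

spark = SparkSession.builder.appName("english-sdk-demo").getOrCreate()

spark_ai = SparkAI()   # defaults to an OpenAI-backed LLM; needs OPENAI_API_KEY set
spark_ai.activate()    # attaches the .ai namespace to DataFrames

df = spark.createDataFrame(
    [("North", 120), ("South", 95), ("North", 80), ("East", 60)],
    ["region", "revenue"],
)

# The analysis request is plain English; the SDK asks the LLM to generate the
# corresponding Spark transformation and then executes it.
top_regions = df.ai.transform("total revenue per region, highest first")
top_regions.show()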

                    Now, with tools like Copilot and NVIDIA’s AI frameworks, even non-technical professionals can describe their app’s features in English, and let the AI generate the necessary code. The process, once cumbersome and costly, becomes streamlined and accessible.

While English as a coding language lowers the barrier to entry, it doesn’t eliminate the need for skill. Here, the art of prompt engineering—crafting precise and effective instructions for AI—becomes crucial. As Huang puts it, “There is an artistry to prompt engineering. It’s how you fine-tune the instructions to get exactly what you want.”

In 2024, the ability to program is no longer reserved for a few. It’s a skill anyone can wield, thanks to the power of natural language processing and AI. So, whether you’re a seasoned developer or someone who’s never written a line of code, the future invites you to participate, innovate, and create. English is no longer just a global language for communication; it’s the new language of innovation.

                    The question now isn’t whether you can learn to code. It’s: What will you build next?

                    ]]>
                    12 Days of OpenAI – The Countdown Begins https://analyticsindiamag.com/ai-trends/12-days-of-openai-the-countdown-begins/ Thu, 05 Dec 2024 11:40:55 +0000 https://analyticsindiamag.com/?p=10142501 With ChatGPT hitting over 300 million weekly active users, speculation about what to expect from OpenAI next has been rife.]]>

                    It’s celebration time at OpenAI! Close on the heels of ChatGPT’s second birthday, the AI powerhouse has announced plans to release new models and features over the next twelve days. “12 days. 12 livestreams. A bunch of new things, big and small. 12 Days of OpenAI starts tomorrow,” the company shared on its official X account.

“🎄🧑‍🎄 Starting tomorrow at 10 am Pacific, we are doing 12 Days of OpenAI. Each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers. We’ve got some great stuff to share, hope you enjoy! Merry Christmas,” posted OpenAI chief Sam Altman on X.

                    With ChatGPT hitting over 300 million weekly active users, speculation about what to expect from OpenAI next has been rife. The company plans to grow this figure nearly fourfold, reaching a billion users over the next year. According to OpenAI, over one billion user messages are sent on ChatGPT every day and 1.3 million developers have built on OpenAI in the US.

                    OpenAI Christmas Announcements: What to Expect

                    1. o1 Full Release

                    According to a recent report, OpenAI is likely to release the complete o1 reasoning model, which will be multimodal. In a recent interview, Altman described the company’s latest AI model, o1, as being at the ‘GPT-2 stage’ of reasoning development. While the model is still early in its development, Altman said that significant improvements are expected in the coming months.

                    Altman also mentioned that users will notice rapid improvements in o1 as OpenAI transitions from the o1-preview model to the full release. 

                    2. Sora 

OpenAI announced Sora earlier this year but hasn’t made it publicly available yet. However, OpenAI gave early access to visual artists, designers, creative directors, and filmmakers, even engaging with Hollywood. Recently, a group of artists with early access leaked a Sora interface on Hugging Face, briefly opening the model up for public testing.

However, shortly after, the Hugging Face page experienced a 502 error due to high traffic. OpenAI became aware of the incident and quickly shut down access.

                    3. New Voices (including Santa)

                    According to rumours, OpenAI is working on a new Christmas theme for Advanced Voice Mode. The voice mode icons will feature snowflake animations along with a new Santa voice. OpenAI is expected to add more voices and introduce a feature where users can generate voices as per their needs. 

                    4. AI Agent 

                    OpenAI is set to launch an AI agent called Operator in January 2025. The tool will perform tasks like writing code, booking travel, and automating routine activities in web browsers. OpenAI may preview it during their festive series live stream.

                    5. Web Browser 

                    OpenAI could also give us a glimpse of its web browser. Darin Fisher, a key creator of Google Chrome and former Google VP of engineering, joined OpenAI as a technical staff member in November 2024. 

                    The product is intended to allow websites to engage with visitors in new ways, enabling conversations similar to those users have with ChatGPT. For example, a user on a clothing retailer’s site could ask for coat recommendations for an upcoming trip, while someone on a cooking site like Bon Appétit could inquire about dishes that pair with tikka masala.

                    6. Desktop App

                    OpenAI recently updated its desktop app. This early beta update allows ChatGPT to examine coding apps and provide better answers for Plus and Team users. It assists with coding in apps like VS Code, Xcode, Terminal, and iTerm2, and also includes a voice assist feature to interact with users. The update lets users take screenshots, upload files, and search the web through SearchGPT. 

                    7. New AI Device

                    Earlier this year, Jony Ive, Apple’s former design chief, confirmed that he’s working with OpenAI CEO Sam Altman on a secret AI hardware project. The two have been collaborating for about a year to create a device that uses generative AI to manage complex tasks more effectively than traditional software.

                    While the details of the device are still unknown, the goal is to develop an AI product that could change how users interact with AI. The design is being worked on in a newly acquired office in San Francisco, with plans for significant funding by the end of 2024.

                    8. Vision Fine-Tuning Upgrades

                    OpenAI is likely to improve GPT-4o’s ability to understand and analyse images. Developers may be able to customise the model for tasks like medical image analysis, satellite image interpretation, or art style recognition. Improvements could include better object detection, a stronger understanding of complex visuals, and better integration of images with text. This would make image-based AI applications more accurate and useful in different industries.

                    9. AI-Generated Music

                    OpenAI may unveil a feature to generate music directly from text prompts, similar to existing AI tools like Jukedeck. By analysing musical patterns and structures, this feature could allow users to create original compositions based on their specific preferences—whether for personal enjoyment or commercial use. This would open up new possibilities for content creators, filmmakers, and advertisers who need unique soundtracks.

                    10. Real-Time Collaboration Features

                    OpenAI could introduce collaborative features that allow users to work on projects in ChatGPT in real time. These features would enable users to collaborate easily, whether in writing, coding, or brainstorming sessions. This could be especially valuable for remote teams, as the AI would act as a real-time collaborator, suggesting ideas, making corrections, or solving problems as the team works. 

                    Notably, OpenAI acquired Multi to develop the ChatGPT desktop app. Multi allows users to share applications across workspaces, enabling teams to code, edit, and collaborate as if they were side by side, regardless of their physical location.

                    11. AI Chips 

                    OpenAI might unveil new AI chips or collaborate with chip manufacturers like TSMC or Broadcom. These chips would be specifically built to optimise AI model processing, providing faster, more efficient task execution. 

                    12. GPT-5 

                    Finally, the one we’re all waiting for. OpenAI could release GPT-5, trained using synthetic data from the new o1-series reasoning models. Earlier this year, in a podcast, Altman also spoke with Bill Gates about how GPT-5 would focus on customisation and personalisation. “The ability to know about you, your email, your calendar, how you like appointments booked, and connect to other data sources—all of that. These will be some of the most important areas of improvement,” Altman said.

                    Furthermore, Altman claimed that GPT-5 would have much better reasoning capabilities. “GPT-4 can reason in only extremely limited ways. Reliability is also a concern. If you ask GPT-4 most questions 10,000 times, one of those answers might be pretty good, but it doesn’t always know which one. You’d want the best response from those 10,000 each time,” said Altman.

                    ]]>
                    AIM Predictions 2025 https://analyticsindiamag.com/ai-trends/aim-predictions-2025/ Wed, 04 Dec 2024 07:22:28 +0000 https://analyticsindiamag.com/?p=10142434 Wild and true, AI predictions for 2025 may not seem too surprising.]]>

Another whirlwind year in AI is coming to an end, marked by nothing less than pure AI madness. Last year, when AIM made bold AI predictions for 2024, we were pretty aligned with what unfolded. From calling 2024 the year of small language models to predicting that open-source AI models would level up with closed-source ones, our predictions were on point. 

Heading into 2025, and going by this wave, the AI developments of the coming year are sure to make an impact at an even larger scale. It is likely that physical AI will appeal at a consumer and personal level, with humanoids probably making their way into people’s homes. 

                    Wild? Absolutely. Plausible? Maybe. So, let’s dive deep into some of the wildest AI predictions for 2025.

                    Agentic AI

No points for guessing, but going by the trend in the second half of this year, agentic AI will continue to grow massively in 2025. All major big-tech companies, including Oracle, Microsoft, Google, and Salesforce, have launched agentic suites of products addressing verticals such as HR, finance, operations, supply chain, and more. A couple of months ago, Microsoft announced the ability to create autonomous agents with Copilot Studio.

                    Interestingly, Gartner named agentic AI ‘the emerging trend for the year 2025’, indicating that autonomously operating agents would have a significant impact across enterprises. 

                    AI in Healthcare

While this may not seem like a new use case, AI applications in the healthcare sector are witnessing increasing adoption. Beyond clinical settings, where medical copilots can assist practitioners with analysing and summarising reports, drug discovery and protein research are also seeing huge growth. 

                    Recently, HCG partnered with Accenture, a leading technology company, to use advanced AI to expedite drug discovery and other developments. 

Google DeepMind also recently open-sourced its revolutionary AlphaFold 3 model, making its model weights accessible to researchers for non-commercial use. This has further opened up possibilities for experimentation in drug discovery. 

                    Personal Use of Humanoids 

                    We dubbed 2024 ‘the year of robotics’ as the field saw a surge in research, development, and advancements, including progress with humanoid robots. 

Figure, Agility Robotics, Boston Dynamics, and Sanctuary AI are just a few of the companies dedicated to building commercial humanoids, and many of their robots are already being deployed at automobile companies. However, as in the movie I, Robot, it is possible that humanoids will make their way into our homes too. 

                    Many Chinese tech companies are also speeding through humanoid developments.

                    Meanwhile, Tesla promoted its humanoid Optimus at the We, Robot event as a humanoid that can assist people with their chores and be part of one’s daily life. While fully functional humanoids might still take time to become reality, the year 2025 will be an important one for this. 

                    AI to 3D Systems 

                    Generative AI in 3D modelling is another area that promises to explode in the coming year. Companies such as Common Sense Machines and World Labs are already converting images to 3D. 

                    Unlike generating images or videos from text, using generative AI to convert images to 3D models is considered highly complex owing to the multifaceted nature of 3D data. This requires understanding and manipulating intricate geometries, textures, and lighting. 

                    Now, with consumer brands already implementing AI to churn out advertisements, 3D will add value to this space. Considering how many companies are experimenting in this segment, 2025 will witness a surge in 3D model companies. 

                    Edge Computing

                    Edge computing is becoming more critical than ever before for running LLMs on phones. Interestingly, this year saw major phone makers, such as Samsung and Apple, bring intelligence to their phones. Qualcomm and Apple are integrating AI capabilities into their chipsets, allowing robust edge computing for LLMs. 

Theresa Payton, former White House CIO, said that by 2025, edge computing would become even more widespread, particularly as AI and IoT expand. 

                    China to Lead the AI Race

Clement Delangue, co-founder and CEO of Hugging Face, is known for his year-end AI predictions and has shared a fresh set for 2025. One of them resonates with our suggestion last year that China has a chance to be at the forefront of the AI race. 

The country seems to be in a league of its own, with its AI models consistently outperforming on leaderboards. Alibaba’s powerful open-source Qwen 2.5 models have recently been leading the AI agent race, and Baidu has also been making significant developments in the AI field. 

                    ]]>
                    What’s Nutanix Cooking Up for New Bharat? https://analyticsindiamag.com/ai-trends/whats-nutanix-cooking-up-for-new-bharat/ Mon, 02 Dec 2024 04:47:20 +0000 https://analyticsindiamag.com/?p=10142180 Nutanix’s success lies in its ecosystem-driven approach, collaborating with major players such as AWS, AMD, and HPE.]]>

                    At the Nutanix .NEXT India tour event, the cloud software giant elaborated on its role in transforming India into a ‘new Bharat’. It achieves this by fueling innovations in sectors as diverse as banking, automotive, oil and gas, pharmaceuticals, IT/ITeS, and software. 

                    The company highlighted its extensive reach into the public sector, powering government state data centres. It enables critical citizen services, supports national data projects, strengthens hospitals, and even fortifies the Indian defence forces. 

                    These vital institutions rely on Nutanix to build, secure, and sustain the nation’s core infrastructure. Together, they’re not just creating solutions but are shaping the future of India.

                    What’s India to Nutanix?

                    For Nutanix, India represents one of its most dynamic markets globally. “India is among our top-performing markets,” shared Andrew Brinded, chief revenue officer at Nutanix. Collaborations with key partners, such as HPE (Hewlett Packard Enterprise), have fueled the rapid adoption of Nutanix’s hypervisor, AHV (Acropolis Hypervisor), which offers cost savings without compromising on functionality.

                    Historically known for virtual machine management, Nutanix has shifted focus toward containerisation, catering to companies embracing modern, agile infrastructures. “Our unified platform allows IT administrators to manage virtual machines and containers seamlessly, delivering simplicity and efficiency,” Brinded noted.

He further highlighted the company’s role in supporting AI-driven innovations across sectors, citing Apollo Pharma as a case study. With Kubernetes deployments powered by Nutanix’s NKP technology, Apollo leverages AI solutions like Nutanix’s GPT-in-a-Box to enhance healthcare delivery for over 200 million people in India.

                    “Nutanix integrates seamlessly with partners like NVIDIA for AI processing and offers robust data services, whether deployed on-premises, at the edge, or in public clouds,” Brinded explained. Customers can implement their preferred large language models (LLMs) on the Nutanix infrastructure, ensuring flexibility and control.

                    This adaptability has made Nutanix a trusted partner in industries ranging from manufacturing and finance to pharmaceuticals and defence.

                    Partnering for Hybrid Cloud Success

                    Nutanix’s success lies in its ecosystem-driven approach, collaborating with major players such as AWS, AMD, and HPE. The company’s exclusive channel model involves working closely with Indian resellers and managed service providers, ensuring a seamless customer experience.

                    “Our hybrid cloud strategies allow customers to integrate public cloud solutions with private infrastructures while maintaining a consistent user experience,” Brinded said.

                    AI in Finance

One of the clients Nutanix is working closely with is HighRadius, which aims to be a leader in automating financial services. Simran Singh, vice president of cloud engineering at HighRadius, offered an insider’s perspective on how this is being brought to reality. “HighRadius operates in the finance domain, primarily automating functions within the office of the CFO,” he shared. 

Recognised as a leader in Gartner’s Magic Quadrant for the order-to-cash space, HighRadius has broadened its focus to include treasury, record-to-report, B2B payments, and account reconciliation.

                    Founded with roots in Robotic Process Automation (RPA), HighRadius was quick to adopt AI and machine learning. “For over a decade, we’ve integrated AI into our products, long before the rise of generative AI,” Singh remarked. 

                    It works with around 1,000 customers including global giants like Unilever and Nestlé, delivering transformative financial solutions, streamlining processes, and enabling CFOs to focus on strategic objectives.

                    “Our solutions deliver a 70% efficiency improvement out-of-the-box, reaching up to 90% within six months. This eliminates manual reconciliation and empowers CFO teams to make more impactful decisions,” Singh explained.

                    Addressing Challenges with Hybrid Cloud Strategies

                    As HighRadius scales, it faces growing challenges related to cost management and infrastructure optimisation. With key banking partners like major US banks hesitant to fully embrace public clouds, HighRadius adopted a hybrid cloud approach.

                    “We’ve implemented Nutanix in our data centre and transitioned to hyperscalers with remarkable success, resulting in significant performance improvements across infrastructure and applications,” Singh noted.

                    In an innovative move, HighRadius deployed bare-metal Nutanix servers within AWS, hosting Nutanix Database Service (NDB) instead of relying on RDS Aurora. This unconventional strategy cut costs by 40% while enhancing operational efficiency. 

                    “While the initial proof-of-concept posed challenges, the collaboration between teams made this a success we’re incredibly proud of,” Singh said.

HighRadius is now testing the Nutanix Kubernetes Platform (NKP) to create a seamless integration between private and public clouds. The long-term vision is to maintain cloud-agnostic operations that ensure scalability and adaptability, with a view to potentially transitioning fully to private clouds in the future.

                    What’s Next?

                    For HighRadius, the focus is clear: Scaling revenue while maintaining its hybrid cloud approach and exploring future transitions. “We’re growing at 40% year-on-year and expanding both organically and inorganically. Our long-term strategy is centred on scalability, agility, and efficiency,” Singh said.

                    For Nutanix, the priority is fostering innovation in India’s burgeoning tech ecosystem, driving the adoption of AI and hybrid cloud solutions across industries. As Brinded aptly put it, “India’s passion for technology and rapid adoption make it one of our best markets for driving growth in emerging areas.”

                    The collaboration between these two tech leaders exemplifies how strategic innovation and partnership can redefine business operations. With HighRadius leveraging Nutanix’s infrastructure expertise, both companies are well-positioned to lead the way in their respective domains.

[Update: At Nutanix’s request, details of HighRadius’ revenue have been removed.]

                    ]]>
                    Dear Investors, Our Failure is on You: Startup Founders https://analyticsindiamag.com/ai-trends/dear-investors-our-failure-is-on-you-startup-founders/ Thu, 28 Nov 2024 09:05:57 +0000 https://analyticsindiamag.com/?p=10141893 Over the last five years, 706 AI startups have failed, of which 54 are from India. ]]>

                    Founded in 2012 by Sean Lane and Jeremy Yoder, Olive AI, a healthcare startup, looked to boost operational efficiency by using AI. Over the years, the company raised significant funding from high-profile investors and reached a peak valuation of $4 billion in 2021. 

The initial success was promising, with over 900 hospitals adopting the technology. However, rapid, unsustainable growth and strategic miscalculations led to Olive AI shutting down in late 2023. “The lack of focus, coupled with the champagne and cocaine mentality brought on by easy VC money totalling almost $1 billion, is what killed yet another AI health startup, Olive AI,” author Sergei Polevikov said.

                    It’s perplexing to think that even with backing from trustworthy investors, a startup can fold due to miscalculations. As surprising as it sounds, data shows that several AI startups backed by prominent investors have shut down in the last five years.

                    When VCs Can Be Counter-Productive

                    In a 2019 interview with OpenAI chief Sam Altman, venture capitalist and Khosla Ventures founder Vinod Khosla said, “I get in a lot of trouble for saying this, [but] 90% of investors add no value. In my assessment, 70% of investors add negative value to a company. That means they are advising a company when they haven’t earned the right to advise an entrepreneur.” 

                    In fact, Khosla has been reiterating this sentiment for over a decade. As a VC who has backed several emerging startups, including OpenAI, one might wonder what drives this critical stance against his own clan. 

                    Well, Khosla has his reasons. 

                    When junior staff asked why they couldn’t join a board like their peers at other firms, Khosla said that it was unfair to entrepreneurs. “Just because you got an MBA and joined a venture firm doesn’t mean you’re qualified to advise an entrepreneur,” he clarified.

                    He believes the key qualification for a VC to offer advice is having built a large company and personally experienced the difficulties, uncertainty, and challenges involved. Without having firsthand experience of the challenges behind running one’s own company, the ability to advise others holds little weight.

                    Holistic VC Power

During an interaction with AIM, Pranavan S, founder and CEO of Control One, a Bengaluru-based startup building AI-driven physical agents, said, “Given the competitive landscape in the field of AI, VC money plays an important role in the infrastructure, team and research needs. Apart from the funds, a VC network or brand name helps build confidence with customers and also strengthen networking and partnerships.”

                    Founded in 2023, Control One has raised $350,000 from industry leaders including iRobot co-founder Helen Greiner, CRED founder Kunal Shah, and executives from Tesla, Walmart, and General Electric.

                    AI startups have been witnessing a remarkable surge in investments. It was reported that, from April to June this year, AI startup investments touched $24 billion, which was double the previous quarter. 

Though VC expertise brings holistic support for scaling a business, conflicts between VCs and founders are common. Pranavan believes these conflicts often stem from mismatched expectations. “At times, founders tend to overpromise and that needs to be kept in check. Once the milestones and expectations are set right, the chances of disagreement drop drastically,” he said. 

                    At Loggerheads

                    AIM gained insights from a VC’s point of view after speaking to Abhishek Prasad, managing partner at Cornerstone Venture Partners.

                    “Friction can arise when VCs behave like bosses and assume roles of mentors or have a know-it-all approach while not bringing any tangible value. In the Indian context, given we are still a young VC ecosystem, we have seen VCs often operate as super angels and not as professional investors in the journey of the founder, leading to situations that often lead to friction,” Prasad said. 

                    Prasad further explained that discord occurs between VCs and founders when they don’t agree on a shared path forward. This friction often arises when VCs push for directions that conflict with the founder’s vision or agenda.

                    “For a VC, the company may be one among many and a game of optimising, whereas for the founder the particular company is all she or he has got. This often leads to mistrust and founders start becoming wary of their relationship with the VC and often stop confiding in VCs, leading to further strain in the relationship,” he added. 

                    AI and Deeptech Startup Evolution 

                    With the rise of AI startups, VC firms investing in AI companies are also transforming, which in turn implies that expectations from them are also evolving. 

VC firm Bharat Innovation Fund (BIF), which focuses on deep tech startups, started investing in them around 2018. BIF co-founder Ashwin Raguraman said the fund backs what it calls globally competitive startups – those that are competitive today or have the potential to become so and address global markets.

                    Though BIF has not yet invested in generative AI startups, they are closely monitoring developments in this area, particularly those that go beyond simple applications and demonstrate maturity and innovation. The firm is keen on supporting ‘AI-native’ companies that integrate AI from the ground up rather than those merely adopting existing models.

                    VC-Founder Relationship 

Speaking about the relationship between VCs and startup founders during a panel discussion at the recent AI summit Cypher 2024, Ashwin emphasised the importance of VCs questioning founders. 

                    “We ask questions before we invest [and] after we invest. We ask questions as board members, but I think…asking questions especially after we invest is to be a sounding board. We can’t necessarily give answers. The founder has to find the answers because that’s when they internalise it and then execute it. So we’re good at asking questions,” said Ashwin. 

                    Interestingly, when asked about a VC’s involvement in a founder’s business, Ashwin’s answer was pretty clear. “In an ideal situation, we’d like to start at 0% investor hands-on, because then it means that the entrepreneur is delivering, and we don’t need to spend time. They’re going to deliver returns. [But] there are times when that zero goes all the way to 70-80% and that’s not a good situation. I know that when I’m hands-on 70%, we are trying to save something that is perhaps sinking without our effort,” he said. 

                    Cypher 2024 – Panel Discussion with VCs and Startup founders.

From left: Arjun Rao, general partner at Speciale Invest; Ashwin Raguraman, co-founder & partner at Bharat Innovation Fund; Korak Roy, AIM video presenter; Rimjhim Agrawal, co-founder & CTO at BrainSight AI; and Abhishek Upperwal, CEO and founder at Soket Labs.

                    VC-Backed Failures

                    Irrespective of how much a VC gets involved in a startup or tries to maintain a balance, the startup can fail. A number of prominent VC-backed startups have failed in the past. 

                    Founder Problems

VC guidance, or misguidance, can cause startups to falter. The company board can also clash with the founders, and since board members often include investors, this creates a different kind of friction.

                    Interestingly, during the interview with Khosla, Altman shared that in risk-driven, decision-making situations, he prefers having a board that calms the entrepreneur, rather than adding to the stress. 

Ironically, that statement was made in a 2019 interview. Just a year ago, Altman found himself at odds with his own board, which ousted him over accusations of a lack of transparency in his decision-making.

                    While that didn’t last long – just over two days, to be precise – he was reinstated as CEO, and his board was disbanded. Eventually, new board members, possibly his allies, were appointed. However, the company has faced a number of high-profile exits, including the departure of co-founders, over the last year. 

Reflecting on what Khosla said, it is evident that conflicts over ideas and involvement are often behind counterintuitive performance outcomes. The 70% figure, however, may not be an absolute threshold.

                    ]]>
                    redBus Builds Multilingual AI Copilots for Bus Operators, Travellers https://analyticsindiamag.com/ai-trends/redbus-builds-multilingual-ai-copilots-for-bus-operators-travellers/ Thu, 28 Nov 2024 03:30:00 +0000 https://analyticsindiamag.com/?p=10141810 At redBus, OpenAI’s models power chatbot interactions in India, while Anthropic’s Claude is being tested in Malaysia.]]>

With AI copilots trending across industries to assist with various tasks, one of India’s leading online bus ticketing platforms has ensured it is not left behind. redBus, founded in 2006 to address bus ticketing inefficiencies, has created an in-house copilot to help its travel partners run their businesses smoothly.

The AI copilot for bus operators aims to support these partners with real-time, data-driven insights that help streamline operations, improve decision-making, and ultimately enhance the overall customer experience.

                    During an interaction with AIM, Anoop Menon, chief technology officer at redBus, said, “We feel that this tool can really help us improve our efficiencies externally, internally, and with our customers, and our partners.”

                    The AI copilot embedded in redPro, a comprehensive platform built specifically for bus operators, is designed to optimise various aspects of their business. Menon emphasised that while the company doesn’t claim to know its operations better than the operators themselves, the AI tool offers “a whole bunch of data, tools, and inferences, to help them manage their business better”.

                    Actionable insights on route optimisation, inventory management, and revenue growth via historical data and trends analysis are some of the features offered by the copilot.

                    AI for Regional Markets 

                    redBus caters to operators across India who speak varied languages by offering multilingual support. The AI tool allows operators to input queries and receive insights in their native language.

                    “If an operator wants insights in their language, the copilot translates and simplifies the data, enabling them to act accordingly,” Menon said. He believes that this functionality can overcome language barriers and ensure that operators can access vital information without technical hindrances.
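In practice, this amounts to a translate, analyse, translate-back loop around the analytics engine. The sketch below is purely illustrative; `translate` and `run_insight_query` are hypothetical placeholders rather than redBus’ actual implementation.

```python
def operator_insight(query: str, operator_lang: str, translate, run_insight_query) -> str:
    # Hypothetical placeholders: `translate` and `run_insight_query` are supplied
    # by the caller; this is not redBus' actual implementation.
    english_query = translate(query, source=operator_lang, target="en")
    raw_insight = run_insight_query(english_query)  # e.g. route or inventory analytics
    return translate(raw_insight, source="en", target=operator_lang)
```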

                    Notably, big tech players such as Google and Microsoft have developed AI assistants and copilots that can function in multiple languages. Microsoft Copilot is readily available in several Indian languages, including Hindi, Bengali, Tamil and Telugu.

                    Improving Customer Interaction

                    Empowering bus operators is one thing, but redBus is also enhancing customer interactions by using generative AI. It has been experimenting with different AI models to determine the most effective solutions for each region.

                    For example, while OpenAI’s models power chatbot interactions in India, the company is also testing Claude, an AI model developed by Anthropic, in Malaysia. “This gives us the ability to compare outputs and choose the most effective solution for each region,” Menon said.

                    This AI integration has significantly improved redBus’ customer satisfaction scores, which have risen by 40% since the introduction of the chatbot.

                    Menon pointed out that the chatbot handles up to 90% of initial interactions while escalating complex cases to human agents when necessary.

At the recent Meta Build event in Bengaluru, Meta announced partnerships with a number of companies, including redBus.

On the operational side, redBus has also seen improvements in its call centre performance. By incorporating AI into customer service, the company has reduced call or chat times from an average of seven to nine minutes to just two to three minutes. Internally, redBus uses a code-to-doc system, which generates documentation directly from source code, reducing developers’ workloads and accelerating onboarding for new team members.

                    redBus is also venturing into voice-enabled booking systems, allowing customers to interact with the platform through voice commands.

AI Challenges Persist

On one occasion, Air Canada’s chatbot provided incorrect information, forcing the airline to compensate the customer. AI hallucinations remain a real concern, and redBus acknowledges this. To address it, the company implements validation mechanisms within its AI systems to ensure the quality of responses.

                    The company implements human review interfaces, combines AI models for cost efficiency, and trains systems to escalate complex cases to human agents to prevent inaccurate responses.
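In practice, such guardrails tend to boil down to a draft-validate-escalate loop. The sketch below is a hypothetical illustration of that pattern, not redBus’ actual system; the chatbot, validator, and human_queue objects are placeholders.

```python
COMPLEX_HINTS = ("refund", "accident", "complaint", "legal")


def answer_customer(query: str, chatbot, validator, human_queue) -> str:
    # All objects here are hypothetical placeholders, not redBus' actual system.
    draft = chatbot(query)
    looks_complex = any(hint in query.lower() for hint in COMPLEX_HINTS)
    if validator(query, draft) and not looks_complex:
        return draft  # the chatbot resolves most initial interactions itself
    ticket = human_queue.escalate(query, draft)  # hand over to a human review interface
    return f"Connecting you to a support agent (ticket {ticket})."
```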

                    Looking ahead, redBus plans to expand its use of AI across various aspects of its operations, with a particular focus on predictive analytics and dynamic pricing models. “We know that the next big company to challenge us could be a startup. We need to keep innovating and running fast,” Menon concluded.

                    ]]>
                    The Breakthrough AI Scaling Desperately Needed https://analyticsindiamag.com/ai-trends/the-breakthrough-ai-scaling-desperately-needed/ Fri, 22 Nov 2024 10:03:49 +0000 https://analyticsindiamag.com/?p=10141427 TokenFormer enables AI to scale by preserving the existing knowledge while seamlessly integrating new information, redefining long-context modelling and continuous learning.]]>

When Transformers were introduced, the entire AI ecosystem underwent a reform. But there was a problem: when researchers wanted to scale a model or modify part of its architecture, the only option was to retrain the entire model from scratch.

This was a critical issue. To address it, researchers from Google, the Max Planck Institute, and Peking University introduced a new approach called TokenFormer.

                    The innovation lies in treating model parameters as tokens themselves, allowing for a dynamic interaction between input tokens and model parameters through an attention mechanism rather than fixed linear projections.

                    The traditional Transformer architecture faces a significant challenge when scaling—it requires complete retraining from scratch when architectural modifications are made, leading to enormous computational costs. TokenFormer addresses this by introducing a token-parameter attention (Pattention) layer that enables incremental scaling without full retraining. 
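To make the idea concrete, here is a minimal, illustrative PyTorch sketch of a token-parameter attention layer. The class name, dimensions, initialisation, and the plain softmax are assumptions made for brevity; the paper’s actual implementation differs (it modifies the attention normalisation, among other details).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Pattention(nn.Module):
    """Token-parameter attention: input tokens attend over learnable parameter tokens."""

    def __init__(self, dim_in: int, dim_out: int, num_param_tokens: int):
        super().__init__()
        # Learnable key/value "parameter tokens" replace the fixed weight matrix
        # of a standard linear projection.
        self.key_params = nn.Parameter(torch.randn(num_param_tokens, dim_in) * 0.02)
        self.value_params = nn.Parameter(torch.randn(num_param_tokens, dim_out) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim_in)
        scores = x @ self.key_params.T          # (batch, seq_len, num_param_tokens)
        weights = F.softmax(scores, dim=-1)     # plain softmax for brevity; the paper
        return weights @ self.value_params      # uses a modified normalisation


x = torch.randn(2, 16, 64)
layer = Pattention(dim_in=64, dim_out=128, num_param_tokens=256)
print(layer(x).shape)  # torch.Size([2, 16, 128])
```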

                    This approach has demonstrated impressive results, successfully scaling from 124M to 1.4B parameters while maintaining performance comparable to Transformers trained from scratch. 

(Training cost reduced significantly using the TokenFormer architecture)

                    Explaining the significance of this research, a Reddit user said that it allows for incremental learning. In other words, changing the model size and adding more parameters does not mean you need to train the entire model from scratch. 

                    “Specifically, our model requires only one-tenth of the training costs associated with Transformer baselines. To mitigate the effects of varying training data, we also included the performance curve of a Transformer trained from scratch using an equivalent computational budget of 30B tokens. 

“Under the same computational constraints, our progressively scaled model achieves a lower perplexity of 11.77 compared to the Transformer’s 13.34, thereby highlighting the superior efficiency and scalability of our approach,” he added, further suggesting drastically reduced costs via TokenFormer.

Why Scaling Efficiency Matters

                    One of TokenFormer’s most compelling features is its ability to preserve existing knowledge while scaling, offering a new approach to continuous learning. This aligns with industry efforts to rethink scaling efficiency. When new parameters are initialised to zero, the model can maintain its current output distribution while incorporating additional capacity. 

                    This characteristic makes it particularly valuable for continuous learning scenarios, where models need to adapt to new data without losing previously acquired knowledge.
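Continuing the sketch above, incremental scaling might look like the hypothetical helper below, which appends zero-initialised key-value parameter rows so that existing behaviour is left largely intact until training gives the new slots a role. This is not the paper’s API; with a plain softmax the preservation is only approximate, whereas the paper’s modified normalisation is designed so that zero-initialised pairs contribute nothing at all.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def grow_pattention(layer: nn.Module, extra_tokens: int) -> None:
    """Hypothetical helper: append zero-initialised parameter tokens to a
    Pattention-style layer (as sketched earlier) without touching existing rows."""
    d_in = layer.key_params.shape[1]
    d_out = layer.value_params.shape[1]
    device = layer.key_params.device
    # Zero keys/values are meant to leave current behaviour (approximately,
    # under plain softmax) unchanged until training assigns the new slots a role.
    new_keys = torch.zeros(extra_tokens, d_in, device=device)
    new_values = torch.zeros(extra_tokens, d_out, device=device)
    layer.key_params = nn.Parameter(torch.cat([layer.key_params, new_keys], dim=0))
    layer.value_params = nn.Parameter(torch.cat([layer.value_params, new_values], dim=0))
```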

                    Run your own experiments at lower costs

                    Meanwhile, the architecture has shown remarkable efficiency in practical applications. In benchmark tests, TokenFormer achieved performance comparable to standard Transformers, requiring only one-tenth of the computational budget. 

                    This efficiency extends to both language and vision tasks, with the model demonstrating competitive performance across various benchmarks, including zero-shot evaluations and image classification tasks.

                    TokenFormer’s design also offers advantages for long-context modelling, a crucial capability for modern language models. Unlike traditional Transformers, where computational costs for token-token interactions increase with model size, TokenFormer maintains these costs at a constant level while scaling parameters. 

                    This makes it particularly suitable for processing longer sequences, an increasingly important capability in contemporary AI applications.

                    A Reddit user praised this research, saying, “In a way, what they’ve developed is a system to store knowledge and incrementally add new knowledge without damaging old knowledge; it’s potentially a big deal.”

Meanwhile, multiple conversations are taking place around technical breakthroughs, like TokenFormer, that could solve the scaling problem.

                    At Microsoft Ignite 2024, CEO Satya Nadella highlighted the shift in focus, stating, “The thing to remember is that these are not physical laws but empirical observations, much like Moore’s Law.”

                    He introduced “tokens per watt plus dollar” as a new metric for AI efficiency, emphasising value maximisation. NVIDIA’s Jensen Huang echoed these concerns, calling inference “super hard” due to the need for high accuracy, low latency, and high throughput.

                    “Our hopes and dreams are that, someday, the world will do a ton of inference,” he added, signalling the growing importance of scaling innovations like TokenFormer in the AI landscape.

                    Too Good to be True?

                    Multiple users have called the idea too good to be true and noted some issues in the research paper. A user said on Hacker News that it is hard to trust the numbers shown in the research. “When training a Transformer to compare against it, they replicate the original GPT-2 proposed in 2019. In doing so, they ignore years of architectural improvements, such as rotary positional embeddings, SwiGLU, and RMSNorm, which culminated in Transformer++,” he added. 

On the other hand, another user from the same thread praised the approach, saying it looks like a huge deal. “I feel this could enable a new level of modularity and compatibility between publicly available weight sets, assuming they use similar channel dimensions. Maybe it also provides a nice formalism for thinking about fine-tuning, where you could adopt certain heuristics for adding/removing key-value pairs from the Pattention layers,” he added.

                    The user further mentioned that according to this paper, the model can grow or scale dynamically by simply adding new rows (key-value pairs) to certain matrices (like K and V in attention layers). The rows at the beginning might hold the most critical or foundational information, while later rows add more specific or less essential details. 

                    While the approach looks promising on paper, we’ll have to wait for developers to implement it in actual models.

                    ]]>
                    WTF is Nikhil Kamath Doing with Young Entrepreneurs?  https://analyticsindiamag.com/ai-trends/wtf-is-nikhil-kamath-doing-with-young-entrepreneurs/ Fri, 22 Nov 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10141391 “If a startup can create long-term value while making a tangible impact, that's what seals the deal for me,” Kamath told AIM. ]]>

                    A few months ago, Zerodha co-founder Nikhil Kamath unveiled the ‘Innovators under 25’ initiative, marking the launch of WTFund. The programme selected nine Indian startups led by exceptional founders under 25. 

For its first edition, WTFund, a non-equity grant fund, selected 15 entrepreneurs working across various sectors, including AI and healthcare. The initiative provides up to INR 20 lakh in non-equity grants, and the entrepreneurs retain full ownership of their startups, with Kamath not taking any stake.

                    Empowering Young Entrepreneurs

                    (Nikhil Kamath with the startup founders selected for the ‘Innovators Under 25’ programme.)

                    As strange as it may sound, Kamath, as an investor, is not looking to have any equity in these companies; he’s clear about his vision of honing emerging young talent in the country.

                    “For the Innovators under 25 programme, my approach is sector-agnostic, but focuses on startups solving real-world problems, especially in health tech, energy transition, space tech, and AI. The reason for offering non-dilutive grants and not taking equity is simple: It is about empowering young founders to grow without the immediate worry of dilution,” Kamath told AIM

                    In 2025, Kamath plans a similar or larger investment to support early-stage startups and provide them the runway they need. He clarified that his focus was on resilient founders who deeply understood their problem space. 

                    “Does this idea serve a real purpose? Can it scale effectively with strong unit economics? If a startup can create long-term value while making a tangible impact, that’s what seals the deal for me,” said Kamath. 

                    AIM had the opportunity to interview five startups from this programme, which places a strong emphasis on AI and tech. 

                    Equipping Creators with High-Performance Cloud Computing

Founded by CS graduate Advait Bansode, Mars Computers aims to disrupt the creative and developer ecosystem by making high-performance computing accessible via the cloud. The startup connects users to data centres through a low-latency pipeline, allowing resource-intensive applications such as Adobe Premiere Pro and DaVinci Resolve to run seamlessly on lightweight devices such as the MacBook Air.

                    “Businesses avoid the hefty cost of buying machines,” said Bansode, emphasising the efficiency of their subscription model. Mars also addresses the flexibility needed for freelancers and small businesses, enabling users to subscribe based on their project requirements rather than investing in permanent hardware. “If I just want to do one project for a month, can I pay for it? That question sparked the idea.”

                    The startup’s abstraction layer ensures that users experience local-computer-level performance while benefiting from scalable cloud solutions. Speaking about their ambitious vision, Bansode said, “We are excited about building tech that hasn’t existed in the market before.”

                    Harnessing DNA for Data Storage

Young TED speaker Anagha Rajesh founded BioCompute with the aim of meeting the growing demand for sustainable and scalable storage solutions. Leveraging the remarkable density and longevity of DNA, the startup aims to commercialise its use for archival storage, potentially replacing traditional storage media that require frequent replacements and consume significant energy.

“The idea is that DNA can last for thousands of years,” explained Rajesh, emphasising the longevity and cost-effectiveness of the technology. By developing an enzymatic approach to DNA synthesis, BioCompute tackles the bottlenecks of cost and scalability, with the ultimate goal of integrating DNA storage into data centres globally.

                    “If we can demonstrate significantly lower space, energy, and replacement costs, this technology will sell itself,” she said. “Biological systems are inherently more energy efficient.” BioCompute also focuses on reducing the reliance on chemical synthesis methods, adopting biological alternatives to make the process more efficient and environmentally friendly. 

                    AI-Powered Diagnostics for Gastric Cancer

                    Founded by MBBS students Tanmaya Gulati and Ria Khurana, RNT Health Insights is on a mission to improve early detection of gastric cancer using AI. Their solution integrates spatial and temporal deep learning models into real-time endoscopy procedures, helping doctors identify lesions that might otherwise be missed during screenings.

                    “Our models predict in just 30 milliseconds,” said Gulati, showcasing the speed and precision of their technology. The solution acts as a “second set of eyes” for doctors, analysing 30-35 frames per second during endoscopic procedures to detect even the smallest abnormalities. “Sometimes lesions appear for milliseconds on the screen and can be easily missed by doctors,” he explained, underscoring the critical role of AI in filling diagnostic gaps.

                    With India ranking third globally in gastric cancer cases, the founders believe their startup has the potential to save countless lives. The team is also working on making the technology compatible with high-definition endoscopy equipment, ensuring broader adoption across healthcare systems. 

                    “We want to integrate it without interfering with the existing workflow,” said Khurana.  

                    Education with AI-Powered Companions

                    Founded by IIT graduate Sparsh Agarwal, Pixa is creating AI-powered toys that serve as interactive companions for children aged 5 to 12. These toys, integrated with advanced language models and custom memory stacks, function as personalised tutors, teaching topics ranging from programming to healthy habits through voice interactions.

                    “Rather than spending $20 a month on multiple apps, parents get a single, screen-free solution,” said Agarwal. The toys feature advanced AI capable of generating dynamic and personalised content for children, from quizzes to stories, ensuring they never outgrow the experience. 

                    Parents can monitor their child’s progress through an app, which also tracks vocabulary levels and provides summaries of interactions to ensure a safe environment. “We wanted to ensure kids have access to powerful educational tools without increasing their screen time.” 

                    He also said that with AI, users won’t have to encounter the limitations of pre-loaded educational gadgets anymore.

                    (Source: Pixa)

                    Sales Preparation with AI

                    Founded by Bhavesh Kotwani (BITS Pilani), Nikhil Mehta (IIT), and Pooja Midha (2X founder), CallPrep is transforming sales workflows by automating pre-meeting preparation, giving sales reps valuable insights without leaving their usual platforms. 

                    “We are trying to make the information work for the salesperson even before they step into the meeting,” the founders said, demonstrating how their AI-driven tool is helping sales teams scale more efficiently.

                    By integrating seamlessly into calendars and CRM systems, the platform delivers actionable intelligence tailored to each meeting, helping sales teams save time on administrative tasks. “Sales reps are overwhelmed with multiple solutions, but with CallPrep, they don’t need to go outside their platform for insights,” the founders explained.

                    Focusing on mid-market and enterprise-level B2B companies, CallPrep differentiates itself by addressing the underexplored pre-meeting space. Unlike competitors targeting post-meeting insights, CallPrep ensures sales teams are prepared with context-specific data like competitor battle cards before their meetings. 

                    In addition to the startups above, WTFund has invested in businesses that are building sustainable and healthy solutions for humans and pets alike. 

                    Urban Animal offers India’s first dog DNA testing service, revolutionising pet care, while Oh! Nuts caters to health-conscious Indian consumers with premium, nut-based snacks. Pawsible Foods introduces sustainable, plant-based pet food using Kavaka™ mycoprotein, and Pamawel targets menstrual pain relief with its plant-based, non-steroidal, FDA-approved formulations.

                    ]]>
                    Can GenAI Speed Up the Legal Slow Lane? https://analyticsindiamag.com/ai-trends/can-genai-speed-up-the-legal-slow-lane/ Thu, 21 Nov 2024 05:06:15 +0000 https://analyticsindiamag.com/?p=10141343 An Indian advocate observed that “99% of law firms don’t use case management systems; most lawyers still rely on diaries or simple Excel sheets to track cases”. ]]>

                    When an airline passenger based outside India faced an issue during his travel, instead of hiring a lawyer, the man decided to use ChatGPT to draft a legal notice for the airline—and received thousands of dollars in compensation. 

                    So, while AI is raring to go, the legal community remains split on its role in the industry.

                    “AI can only be adopted for non-crucial tasks in the legal landscape,” said Prasad Karhad, founder and director of Patent Attorney Worldwide, in an interview with AIM Media House. “Though it holds the scope of replacing the typist and beyond.”

                    In an episode of the All-In podcast, David Sacks, former CEO of Yammer, reflected on the potential of large language models (LLMs) in revolutionising legal services. Early predictions were bold, with startups quickly emerging to disrupt the legal industry by leveraging AI. However, Sacks pointed out a crucial flaw—error rates.

                    “When applying AI to any industry, one must consider the margin for error,” Sacks noted. “We know AI systems can make mistakes; they hallucinate. In legal services, even a small error can have huge consequences since this is an industry where accuracy is non-negotiable.”

                    Sacks’ observation underscores a significant challenge: While AI has the potential to streamline legal processes, the stakes are too high for errors. Legal professionals, therefore, have been hesitant to fully embrace AI technologies.

                    Lawyers to be Blamed?

                    In another podcast, Jake Heller, co-founder and CEO of Casetext, shared his frustrations with outdated legal technology. Recalling his time as a lawyer, he described how inefficient tech hampered his work.

                    “In 2012, if I wanted to find movie times or locate a vegetarian-friendly restaurant, I could do it instantly on my iPhone,” Heller said. “But if I needed to find a critical piece of evidence to exonerate a client, I’d be stuck sifting through piles of documents, working until 5 am for days.”

                    Heller’s experience vividly captures the harsh reality many lawyers face, struggling to navigate archaic systems, drowning in paperwork and relying on clunky, outdated research tools. 

                    The inefficiencies in legal workflows became a major pain point for him—a problem he set out to solve.

                    Even Indian lawyers are not open to adopting tech. Sandeep Hegde, an advocate at Platinum Legal, said, “About 99% of law firms don’t use CRMs or case management systems. Most lawyers still rely on diaries or simple Excel sheets to track cases.”

                    In an exclusive conversation with AIM, Hegde explained that the legal industry’s aversion to change is deeply rooted in tradition. “As lawyers, we’re bound by precedent, by old laws that dictate how things should be done. There’s a mindset that if something worked before, it will continue to work, so why change?”

                    For now, while AI can enhance certain aspects of legal work, full-scale disruption remains out of reach.

                    Trouble in Legal Tech

                    To better understand the tech adoption in the Indian legal system, AIM spoke to startups in the industry. 

                    Himanshu Gupta, founder & CEO at Lawyered, explained, “My goal is to reduce the cost of legal assistance to as low as one rupee per minute, similar to how talk time was once charged for speaking with a lawyer. I want to make this affordable service available to the rural population in their local languages. The aim is not to profit from native AI, but to extend basic legal aid to every corner of the country.”

                    He plans to introduce revenue-generating services, such as documentation drafting and other tangible legal solutions, which will be rolled out alongside AI-driven resolutions.

                    But are lawyers ready to embrace this shift?

                    A lawyer told AIM that she wouldn’t be willing to work for such low fees. 

                    Also, when it comes to contributing to tech development, the legal community seems hesitant.

                    Inspired by platforms like Stack Overflow and Wikipedia and believing that improving the technology and the content lawyers used would lead to success, Heller and his team launched a user-generated content (UGC) model, encouraging lawyers to annotate case law. However, it failed.

                    “Lawyers bill by the hour, and their time is extremely valuable,” Heller explained. “They didn’t have the time to contribute to a UGC platform. We realised we had to pivot.”

                    The solution came through AI. Heller’s team began leveraging machine learning and natural language processing—what we now refer to as AI—to automate key aspects of legal research. Instead of relying on UGC, AI could identify the patterns and citations within case law that were essential to lawyers.

                    Though their early efforts brought incremental improvements, Heller noted that it wasn’t until the launch of CoCounsel that they achieved something truly transformative. CoCounsel applied advanced AI to flag cases, streamline research, and improve legal workflows. 

                    However, some lawyers were still resistant to change.

                    “Many successful lawyers were making millions annually. Why would they want to disrupt their way of working, even if the tech could make them more efficient?” Heller said.

                    Lawyers Can’t Market Themselves 

Well, the lawyers couldn’t be faulted for not agreeing to the price and tech terms. Rule 36 of the Bar Council of India (BCI) rules specifically prohibits advocates from soliciting work or advertising their services in any manner.

                    So, how will the clients know which lawyer to reach out to for which case? 

Lawyered has recognised this gap and developed a product called LOTS (Lawyer on the Spot), India’s first on-road legal assistance platform. “With LOTS, we provide immediate on-call legal assistance for situations that can be resolved over the phone, such as when drivers face unjust penalties or unlawful charges.”

He further mentioned that they’ve established a network of 70,000 lawyers covering every 50 kilometres along the highways, so that in cases of accidents, theft, or auto robbery, if a vehicle owner needs a lawyer on-site, legal help can be mobilised within two working hours.

                    LOTS addresses these incidents by offering comprehensive legal solutions, from immediate assistance to court representation, if necessary. Over time, this platform has evolved into a flagship product for Lawyered, setting a new standard in on-road legal support across India.

Another major issue with legal tech is that it’s often designed by techies for legal professionals. Advocate Nagalakshmi S from Platinum Legal, Bengaluru, pointed out that many legal software products demand anywhere between INR 50,000 and INR 2,50,000 even for a trial version.

                    “When we inquire about a demo, the first question they ask is whether we can afford it,” Nagalakshmi said. “These tools are clearly built for corporate legal teams, not for law firms tackling cases in court. They’re designed for a corporate framework, not to solve real-world courtroom challenges.”

                    What’s Next?

                    During the inauguration of the National Judicial Museum and Archive in Delhi, former Chief Justice of India DY Chandrachud interacted with an AI lawyer and asked, “Is the death penalty constitutional in India?”

                    To this, the AI lawyer, in the form of a spectacled man wearing an advocate’s bow tie and coat, answered, “Yes, the death penalty is constitutional in India. It is reserved for the rarest of rare cases as determined by the Supreme Court, where the crime is exceptionally heinous and warrants such a punishment.”

                    Though it sounds interesting, as of now, most tech companies working in the legal sector are focused on research. 

                    Lexlegis.ai, an advanced LLM, is taking the lead in accelerating legal research in India. Speaking at Cypher 2024 – India’s biggest AI conference, hosted by AIM Media House, Saakar Yadav, founder and CMD at Lexlegis.AI, emphasised, “Legal research is not just about finding judgments; it’s about finding precise answers”.

He highlighted the complexity of the country’s legislation, citing the Income Tax Act, which has undergone more amendments than any other law globally. As a result, lawyers and citizens alike often face the challenge of sifting through countless documents, statutes, and circulars to answer even basic legal questions.

“We’re solving that problem,” Yadav declared, explaining that people don’t want just a list of documents; they want direct, meaningful answers, and Lexlegis.ai aims to provide exactly that.

                    AI’s impact is equally transformative in the realm of legal research and discovery. Algorithms have long been employed to sift through vast amounts of data in lawsuits, and now ML techniques are optimising this process even further. 

                    Services like CS Disco provide AI-driven solutions that assist law firms in identifying relevant documents while navigating the complexities of legal restrictions. 

                    Additionally, platforms such as Westlaw Edge have integrated advanced semantic search capabilities, enabling attorneys to delve deeper into legal texts with greater understanding and insight. Features like Quick Check can even flag potentially outdated case citations, ensuring attorneys remain well-informed in their arguments.

                    However, India already has platforms like SCC Online, LiveLaw, and ECourts that are widely used by lawyers. With just a keyword, these tools can instantly pull up relevant case details. Also, they offer live updates on ongoing cases, providing information at a lawyer’s fingertips in real time.

According to Hegde, a meaningful combination of legal practice and AI is still 20-30 years away, and legal research, as it stands, seems like a black hole.

                    Source: LinkedIn

                    Although Zerodha co-founder Nikhil Kamath is ambitious, the reality may be far from what was expected. 

                    [Update: Quote by Advocate Nagalakshmi S from Platinum Legal, Bengaluru has been revised]

                    ]]>
                    The Year Microsoft Built ‘UI for AI’ https://analyticsindiamag.com/ai-trends/the-year-microsoft-built-ui-for-ai/ Wed, 20 Nov 2024 11:31:38 +0000 https://analyticsindiamag.com/?p=10141297 More than just tools for automation, Microsoft’s AI agents potentially embody a philosophy of personalisation and proactivity. ]]>

                    The year 2024 was pivotal for AI in the workplace as Microsoft expanded its Copilot capabilities, shaping what CEO Satya Nadella termed ‘the UI for AI’. From Copilot’s debut in Microsoft 365 to the introduction of Copilot Actions, SharePoint Agents and Copilot Studio, this suite of AI tools redefined how users interact with technology, promising not just enhanced productivity but also the ability to offer personalised experiences at scale.

Nadella’s assertion that “Team Copilot can even be your project manager” reflects the company’s vision of AI as an integral assistant, empowering individuals and enterprises alike.

                    The Evolution of Copilot

                    At the heart of Microsoft’s Ignite 2024 announcements were enhancements to Copilot that allowed users to automate repetitive tasks and create custom AI agents tailored to specific workflows. These tools integrate seamlessly across Microsoft’s ecosystem, including Teams, SharePoint and Planner.

                    For instance, SharePoint agents provide contextual insights from stored content, while Teams Facilitator agents handle tasks such as real-time meeting transcription and translation in multiple languages, enhancing global collaboration.

                    Emphasising the practical impact of these developments, Nadella said, “It’s not about tech for tech’s sake but about translating it into real outcomes.” This philosophy aligns with Copilot Analytics, which helps organisations track performance metrics like sales and marketing trends, and Copilot Studio, where users can create bespoke AI workflows.

                    A Win for the Consumers

                    The introduction of Copilot Labs and Copilot Vision provides additional resources for developers to experiment with generative AI and enhance their productivity across diverse scenarios.

                    Copilot Vision integrates voice and vision capabilities, enabling multimodal inputs for tasks like interpreting visual data, generating contextual suggestions, and assisting with design workflows. This makes it particularly valuable for creative and technical projects. 

                    Alongside a redesigned interface that enhances accessibility and ease of use, Microsoft’s collaboration with Inflection AI has further refined Copilot’s responsiveness and adaptability. 

                    Microsoft also unveiled updates to GitHub Copilot, an AI coding assistant that now supports Anthropic’s Claude 3.5 Sonnet. The new features also include support for coding in Hindi, aiming to make the tool accessible to a broader developer community. 

                    Expanding its reach beyond GitHub, Microsoft enhanced Windows Copilot, incorporating features like ‘Recall’ to assist users in revisiting previous interactions or workflows. The updates reflect Microsoft’s broader vision of making AI an integral part of everyday computing experiences.

                    From Features to Functionality: The AI Assistant Revolution

                    Microsoft’s AI agents are more than just tools for automation. They potentially embody a philosophy of personalisation and proactivity, a messaging unique to Microsoft. SharePoint Agents, for instance, allow users to customise AI assistants for specific files, folders, or sites while respecting established permissions. These agents can sift through organisational data to provide actionable insights, effectively transforming SharePoint into a dynamic knowledge hub. Similarly, facilitator agents in Teams and Interpreter Agents capable of real-time multilingual translation enrich collaboration by breaking down barriers in communication and decision making.

                    The personalisation doesn’t end there. Copilot Studio empowers users to create bespoke AI agents tailored to their workflows. These agents can perform autonomous actions like responding to events, managing sales orders, or executing routine IT tasks, all while adapting to user-specific needs. By granting individuals the power to mould AI as per their preferences, Microsoft blurs the line between AI as a utility and AI as a collaborator.

                    Business Impact: A Case Study in Vodafone

The real-world implications of these innovations are already visible. Vodafone’s deployment of Copilot AI virtual assistants has resulted in 45 million customer interactions per month, reducing call handling times by over a minute and saving the company an estimated $50 million annually. Such figures illustrate the “math that matters”, as a Microsoft representative highlighted, underscoring AI’s role in optimising both cost and customer satisfaction.

                    The emphasis on measurable ROI underscores Microsoft’s approach to AI. By weaving productivity analytics into its solutions, such as with the forthcoming Copilot Analytics, companies can track how AI influences their KPIs. 

                    In the keynote, Nadella summarised this mindset by saying, “It’s not about tech for tech’s sake, but translating it into real outcomes.”

                    A Collaborative AI Ecosystem

                    The keynote also highlighted Microsoft’s partnerships with major players like Nvidia and AMD, underscoring the collaborative ethos in building AI infrastructure. Nvidia CEO Jensen Huang lauded Copilot for improving productivity within Nvidia’s operations and acknowledged the rapid development of the world’s fastest AI supercomputer on Azure. This synergy between hardware innovation and software application reinforces Microsoft’s leadership in AI development.

                    In addition, Microsoft’s advancements in custom silicon, such as the Azure Maia AI accelerator, promise to optimise performance and sustainability in AI processing, further bolstering Copilot’s capabilities. These technological underpinnings ensure that AI tools are not just powerful but also scalable and secure.

                    Challenges and Future Directions

                    Despite its promise, Microsoft’s vision for an AI-integrated workplace is not without challenges. Privacy and ethical considerations loom large as organisations adopt increasingly sophisticated AI tools. Nadella assured audiences that data protection remains a core priority, with Copilot following strict user permissions. Yet, as AI becomes more deeply embedded in personal workflows, maintaining trust will require ongoing vigilance.

Looking ahead, the success of tools like Copilot hinges on their ability to balance automation with user control. Microsoft’s introduction of Copilot Studio, where users can design their own AI agents, is a step toward democratising AI development. Nadella encapsulated this ethos, saying, “Think of Copilot Analytics as a tool for all of us to change how work, workflow, and work artefacts are getting done.”

Microsoft’s vision of Copilot as the ‘UI for AI’ is not just about enhancing productivity; it is about reimagining the workplace. By combining AI with user-friendly design, the company is building a future where AI is as integral to work as the internet once was.

                    With its ambitious roadmap and strong industry partnerships, 2024 might indeed be remembered as the year Microsoft transformed how we interact with technology, making AI a trusted and ubiquitous assistant in the modern workforce.

                    ]]>
                    What Does the NVIDIA-Reliance Relationship Mean for India?  https://analyticsindiamag.com/ai-trends/what-does-the-nvidia-reliance-relationship-mean-for-india/ Wed, 20 Nov 2024 10:41:10 +0000 https://analyticsindiamag.com/?p=10141280 "India will be one of the biggest intelligence markets and it is not only our aspirations but also the raw gene pool and the raw gene power that exists in India," said Reliance chief Mukesh Ambani. ]]>

                    A recent fireside chat between Reliance chief Mukesh Ambani and NVIDIA CEO Jensen Huang during the NVIDIA AI Summit held in Mumbai created a lot of buzz in the Indian tech ecosystem. Mohandas Pai, chairman of Aarin Capital and former Infosys executive, believes that Reliance’s entry into India’s cloud industry is set to drive competition, lower costs, and boost data localisation. 

                    During an interaction with AIM, Pai revealed that India currently has around 950 megawatts of cloud capacity, which is expected to grow to approximately 3,000 to 4,000 megawatts over the next three to five years. Achieving this expansion would require an investment of ₹50,000-₹60,000 crore.

                    With initiatives such as free data for Jio subscribers, Reliance is strengthening its position while promoting data retention within India, which is a strategic move for data security. Its capacity for large-scale data storage could push competitors like Microsoft to also host data domestically.

                    Talent Workforce 

                    At the summit, Huang called NVIDIA ‘AI in India‘, while Ambani underscored how ‘Vidya’ means knowledge in Sanskrit. The two leaders stressed NVIDIA’s active collaboration in the Indian market.

“India will be one of the biggest intelligence markets and it is not only our aspirations but also the raw gene pool and the raw gene power that exists in India,” Ambani said.

                    He believes that India’s youth power will drive advancements in intelligence and that, once achieved, intelligent services will extend beyond software to seamlessly integrate with the rest of the world.

                    With Reliance’s plans to build a gigawatt-scale AI-ready data centre in Jamnagar in place, the question of whether there is sufficient talent arises. Pai, however, clarified that this would not be an issue. He mentioned how IT companies are already managing and maintaining data centres globally. He believes that with ample talent available and given the right opportunities, India can excel in this field. 

“For example, Yotta, with Hiranandani, created a very large centre for billions [of] dollars in Navi Mumbai. I mean, that is a hyperscale. As India’s first hyperscale, this shows there is enough talent, [and it] is not a problem for the country,” he said.

                    Similarly, Nasscom’s Vipul Parekh also believes India houses a large talent pool. “I feel that India already has 20% of the AI engineers in the world. With the numbers exponentially rising, and with more and more sophisticated courses being offered in our institutions, the proportion can only rise,” said Parekh while interacting with AIM

                    “India used to cater to much of the backend and outsourcing of software earlier. It should be at the front end of solutions, five years down the line,” Parekh added.

Pai also had an interesting take on the pattern of data centre development, which is happening mostly in states such as Gujarat rather than Karnataka.

                    He said that the state faces challenges due to bureaucratic hurdles and slow policy implementation, unlike Gujarat or Tamil Nadu, where the government is more efficient. He suggested that the Karnataka government could attract cloud companies by offering cheaper land and reliable power.

                    “The Karnataka government has to go after them. We need a group, we need a policy to attract at least 10-12 big players in the cloud to set up data centres in Bangalore,” Pai added.

                    Affordable Intelligence 

                    NVIDIA founder and CEO Jensen Huang with Reliance chairman and MD Mukesh Ambani at the NVIDIA Summit. Source: Jio

                    Referring to Jio, Ambani highlighted the affordable data revolution that the company brought to India. “Apart from the US and China, India has the best digital connectivity infrastructure – 4G, 5G and broadband,” he said.

                    “Jio took India from number 158 in the world to number one in eight years. We as a single company didn’t know anything about this domain, but today we are the largest data company in the world,” Ambani added. 

The reference highlights the ‘intelligence’ vision that Reliance wants to bring to the country by leveraging India’s vast market, where the company has been able to deliver around 16 exabytes of data at a low cost of 15 cents per GB, significantly less than a global average of $35 per GB.

In the process, Jio has driven data adoption and provided benefits to users, delivering annual customer value estimated at $500 to $700 billion. Reliance now aims to replicate this strategy in the emerging intelligence revolution.

                    AI for India

                    At Jamnagar, Reliance is preparing for a large-scale expansion by building infrastructure capable of supporting 1GW of power, with the potential to expand multiple gigawatts at a single location. The facility will use NVIDIA’s latest Blackwell chips, making it one of the first big Indian tech companies to receive the most powerful chip from NVIDIA.  

                    At the company’s 47th Annual General Meeting in September, Reliance unveiled several AI products, including Jio Brain, Jio AI-Cloud, and Jio Phone Call AI. The Ambanis also discussed their broader AI infrastructure vision for India, which was also discussed during the NVIDIA summit. 

                    Interestingly, the Jio AI products will ultimately be available to Jio users, thereby tapping into their large base of customers. However, how this would affect future pricing for Jio users remains to be seen. 

                    NVIDIA and IT

Not just Reliance, a number of other Indian companies have also announced collaborations with NVIDIA. IT giants including Infosys, TCS, Tech Mahindra and Wipro have partnered with the chip giant, aiming to create new jobs and train developers in AI.

                    “In the last couple of years we’ve been working together to upskill and we’ve now upskilled about 2,00,000 IT professionals into the world of AI,” said Huang, who underscored his vision – ‘start locally, grow globally’. Customers will utilise custom models using NVIDIA NeMo and NIM microservices for their personalised needs. 

                    Building Jio’s Ecosystem 

                    At the summit, Ambani also explained his plans to establish a development centre in India to train a large number of developers—potentially hundreds of thousands—in the use of core foundry and Omniverse tools. This initiative aims to equip Indian developers with advanced enterprise skills, enabling them to apply AI and intelligence solutions effectively in real-world scenarios across industries.

“I can assure you that, like we did in data, in a few years from now we will surprise the world with what India and Indians can achieve in the intelligence market,” he further said.

                    While it cannot be overlooked that Huang’s focus on India is potentially boosting the country’s development, it is also helping NVIDIA grow significantly. As mentioned earlier, catering to a population of 1.4 billion to bring intelligence at scale will require a substantial amount of GPU power—and who better than Huang and NVIDIA to bridge that gap? Ultimately, India, Reliance, and NVIDIA all stand to benefit.

                    Looking at the AI advancements that India is gearing for, Pai believes that the country should have a large-scale fund allocated just for AI. “India needs to create a ₹10,000 crore AI and robotics fund that will invest in startups. We need that because a lot of startups are coming, and they need capital,” he concluded. 

                    ]]>
                    Goodbye Vanilla RAG, Agentic RAG is Here https://analyticsindiamag.com/ai-trends/goodbye-vanilla-rag-agentic-rag-is-here/ Tue, 19 Nov 2024 11:30:00 +0000 https://analyticsindiamag.com/?p=10141161 With agentic RAG, it seems like the conversation around fine-tuning and RAG is finally dead, as agents are now helping in reasoning.]]>

                    Everyone loves retrieval-augmented generation (RAG). It has revolutionised how AI systems process and respond to user queries by leveraging external knowledge sources. At the same time, everyone wants to replace RAG with something new as it doesn’t meet all the diverse needs of modern enterprises.

                    As the demands for nuanced, complex, and adaptive AI systems grow, the traditional RAG approach—often dubbed vanilla RAG—is reaching its limitations. This is where agentic RAG comes into play. Agentic RAG represents an advanced architecture that combines the foundational principles of RAG with the autonomy and flexibility of AI agents, promising a future where AI systems are more adaptive, proactive, and intelligent.

                    What Exactly is Agentic RAG?

Armand Ruiz, VP of product-AI platform at IBM, shared on LinkedIn that agentic RAG is here, and it aligns with the future of AI, which he believes is also agentic. He posted the GitHub repository for a LangChain agentic RAG system built with IBM’s Granite 3.0 8B Instruct model on Watsonx.

In its conventional form, vanilla RAG involves a linear pipeline where user queries are processed through retrieval, reranking, synthesis, and response generation. While it effectively generates grounded and contextually relevant answers, vanilla RAG struggles with flexibility. It relies heavily on predefined knowledge sources, lacks mechanisms for validating retrieved data, and operates as a one-shot retriever without iterative refinement.

                    Agentic RAG addresses these shortcomings by integrating AI agents into the RAG pipeline. These agents act autonomously, orchestrating complex tasks like planning, multi-step reasoning, and tool utilisation. This agentic approach transforms static retrieval systems into dynamic frameworks capable of adapting strategies based on evolving data and user needs.

                    At the core of agentic RAG is the ability to incorporate agents at various stages of the RAG pipeline. It allows users to build systems with complete autonomy to reason and execute specific tools when needed.

Weaviate’s technology partner manager Erika Cardenas and machine learning engineer Leonie Monigatti explained that agents determine whether external knowledge is needed, select the appropriate retrieval tool (e.g., vector search, web search, APIs), and formulate queries tailored to the task.

                    Further, instead of relying on the initial retrieved data, agents validate its relevance and re-retrieve if necessary, ensuring the final output aligns with the user’s intent. Agents can also access diverse tools, from calculators and email APIs to web searches and proprietary databases, significantly broadening the scope of what can be retrieved and processed.
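Put together, the control flow the Weaviate authors describe looks roughly like the sketch below. Every helper name (needs_retrieval, pick_tool_and_query, is_relevant, has_enough, generate) is a hypothetical placeholder standing in for LLM calls, not the API of LangChain, LlamaIndex, or Weaviate.

```python
def agentic_rag(query: str, agent, tools: dict, max_rounds: int = 3) -> str:
    # `agent` wraps LLM calls behind hypothetical helper methods; `tools` maps
    # tool names to retrieval callables (vector search, web search, APIs, ...).
    context = []
    for _ in range(max_rounds):
        # 1. Decide whether external knowledge is needed at all.
        if not agent.needs_retrieval(query, context):
            break
        # 2. Pick a retrieval tool and formulate a query tailored to it.
        tool_name, tool_query = agent.pick_tool_and_query(query, list(tools), context)
        results = tools[tool_name](tool_query)
        # 3. Validate what came back; irrelevant hits simply trigger another round.
        context += [r for r in results if agent.is_relevant(query, r)]
        if agent.has_enough(query, context):
            break
    # 4. Only then generate the final, grounded answer.
    return agent.generate(query, context)
```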

With agentic RAG, the long-running fine-tuning versus RAG debate finally seems settled. Agents can resolve queries with unparalleled accuracy and speed by retrieving information from community forums, internal knowledge bases, and documentation.

It is akin to a model being fine-tuned at inference time, or what some call reasoning with multiple models. This is also what LlamaIndex calls the agentic RAG framework: adding LLM layers to reason over inputs and post-process the outputs.

                    This architecture isn’t limited to single-agent systems. In more advanced setups, multiple agents collaborate under the guidance of a meta-agent, with each specialised in tasks like summarising internal documents, retrieving public data, or analysing personal content like emails and chat logs.

                    RAG is Not the Answer?

Not everyone likes RAG. Amit Sheth, the chair and founding director of the Artificial Intelligence Institute of South Carolina (AIISC), replied to Ruiz’s post, claiming RAG bothers him in principle. “You need RAG because the core/backend/main AI system is inadequate,” Sheth said, adding that RAG systems exist only because core AI systems are not good enough to provide accurate information on their own, which undermines the effort that went into building them.

Moreover, according to several researchers, if an agentic RAG system thinks longer and still gives no response because the information is not available in the database, the extra compute is simply wasted; the approach, therefore, does not scale with more compute.

                    In a bid to move beyond RAG, Google introduced a new approach—retrieval interleaved generation (RIG)—with its DataGemma model. This technique integrates LLMs with Data Commons, an open-source database of public data. 

                    With RIG, if the AI model needs more current or specific data, it pauses to search for this information from reliable external sources like databases or websites. The model then seamlessly incorporates this newly acquired data into its response, alternating between generating content and retrieving information as needed. 
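Conceptually, RIG interleaves generation and lookup in a loop along these lines. The marker format, the generate_step callable, and the lookup function are all illustrative assumptions, not Google’s DataGemma implementation.

```python
import re


def rig_generate(prompt: str, generate_step, lookup, max_turns: int = 5) -> str:
    # `generate_step` continues the draft and may emit a "[LOOKUP: ...]" marker
    # when it wants fresh data; `lookup` resolves that request (e.g. a Data
    # Commons statistic). Both are hypothetical placeholders.
    text = prompt
    for _ in range(max_turns):
        text += generate_step(text)
        match = re.search(r"\[LOOKUP:\s*(.+?)\]", text)
        if match is None:  # no outstanding data request: generation is finished
            break
        value = lookup(match.group(1))
        text = text.replace(match.group(0), str(value), 1)  # splice the value back in
    return text
```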

                    When it comes to agentic RAG, however, the system dynamically adapts retrieval strategies, accessing varied tools and knowledge sources beyond static databases. With iterative retrieval and reasoning, agents ensure the data they retrieve is accurate and relevant. 

                    Agents anticipate user needs and take preemptive actions, enabling a smoother and more efficient interaction process. This proactive and adaptive nature makes agentic RAG particularly effective in scenarios requiring detailed reasoning, multi-document comparison, and comprehensive decision-making. In this context, even approaches like RIG become less relevant.

                    ]]>

                    AI is the New BI https://analyticsindiamag.com/ai-trends/ai-is-the-new-bi/ Tue, 19 Nov 2024 09:34:01 +0000 https://analyticsindiamag.com/?p=10141162 ThoughtSpot’s latest autonomous AI agent aims to address data interpretation challenges by enabling conversational interactions with data.]]>

                    AI combined with business intelligence (BI) could be a game-changer for analysts and consultants alike. Analysts now have more tools at their disposal, enabling them to scan large datasets for patterns and anomalies using natural language. With autonomous AI agents taking over analytics, companies already leveraging AI are stepping into the next phase.

                    California-headquartered BI and analytics company ThoughtSpot, with a significant presence in Bengaluru, recently launched its latest innovation: Spotter, an agentic AI tool that functions as a virtual analyst for businesses.

                    “When we launched Spotter, one of the first things we did was open it up so anyone can start using it right away. It’s one of the coolest AI agents for analytics out there,” said Ketan Karkhanis, the new CEO of ThoughtSpot, in an exclusive interaction with AIM.

                    Conversational AI for BI

                    From its inception, ThoughtSpot has aimed to demystify data for users. “We have been on a mission to let people really understand what’s happening in their data,” Karkhanis said, emphasising the company’s effort to redefine self-service BI. However, he believes that self-service BI, as traditionally implemented, is a flawed concept.

                    “Self-service is the biggest hoax in the industry. When they say self-service, they essentially mean ‘Go build your own dashboards’,” he said, adding, “Do you wake up in the morning and say, ‘I want to build some dashboards today’? No, you just want to run your business.”

                    With Spotter, ThoughtSpot aims to address these challenges by enabling conversational interactions with data. 

                    Karkhanis highlighted that since humans do not communicate in the language of data, and data does not communicate on human terms, there is a need for an interface that caters to both. Spotter addresses this by allowing users to pose complex, multi-step questions in natural language and obtain accurate, contextual responses.
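
                    As a generic illustration of that kind of interface, and not ThoughtSpot's implementation, a conversational analytics layer typically translates the natural-language question into a structured query against a known schema, executes it, and then has the model summarise the result. The schema and `llm` client below are assumptions.

```python
# Generic sketch of a conversational-BI step: translate a natural-language
# question into SQL against a known schema, run it, and summarise the result.
# This illustrates the general pattern only; it is not ThoughtSpot's Spotter,
# and the schema and `llm` client are assumptions.

import sqlite3

SCHEMA = "sales(region TEXT, month TEXT, revenue REAL)"   # hypothetical table

def ask(question, llm, conn):
    sql = llm.generate(
        f"Schema: {SCHEMA}\n"
        f"Write one SQLite query that answers: {question}\n"
        "Return only the SQL."
    ).strip()
    rows = conn.execute(sql).fetchall()   # a real system would validate the SQL first
    return llm.generate(
        f"Question: {question}\nQuery result: {rows}\n"
        "Answer the question in one or two sentences."
    )
```

                    In practice, such a system would sit behind a semantic layer and validate or sandbox the generated SQL before executing it, rather than running it directly as this toy sketch does.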

                    “Across the world, customers are looking at their BI stack and realising that it is not meant to solve the problems of the future. It’s hard for them to bolt AI onto that stack; they can’t simply put AI on it. They need an analytics platform that’s built from the ground up for AI – and that’s driving ThoughtSpot’s growth,” said Karkhanis. 

                    Interestingly, Spotter integrates with leading cloud platforms and LLMs, including the GPT-series models and Google Gemini.

                    Copilot and Agents

                    ThoughtSpot was co-founded in 2012 by Ajeet Singh, who had also co-founded the cloud computing company Nutanix.

                    With LLMs emerging as powerful tools for data interpretation and the rise of copilot-enabled solutions, analysts now have an abundance of options at their fingertips. Big tech companies such as Oracle, Microsoft, and Salesforce have also introduced plenty of AI agents into their enterprise product suites. However, not all of them are suited for analytics.

                    “Many customers have told us that they spent six months trying to make it work on ChatGPT, but it doesn’t. ChatGPT never claimed that it would work. It’s not interested in this since it’s not its primary business,” co-founder Singh told AIM.

                    Interestingly, the definition of agents is also being debated. Recently, Salesforce co-founder and CEO Marc Benioff criticised Microsoft’s Copilot, calling it disappointing and lacking in accuracy.

                    “I have yet to find anyone who’s had a transformational experience with Microsoft Copilot or the pursuit of training and retraining custom LLMs. Copilot is more like Clippy 2.0,” he said.

                    The discussion on copilots and agents also brings into focus what each one actually is. Karkhanis draws a clear distinction between assistive tools and true autonomous agents. He explains that many systems today, such as Microsoft’s Copilot, operate on single-turn Q&A, answering one question at a time, and lack the reasoning, adaptability, and ability to learn a user’s business that would qualify them as autonomous.
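
                    The distinction can be made concrete with a toy contrast: a copilot-style helper answers one question at a time and remembers nothing, while an agent keeps conversational state and can be "coached" with business-specific guidance that shapes every future answer. Both classes below are hypothetical simplifications, not any vendor's product.

```python
# Toy contrast between a single-turn copilot and a stateful, coachable agent.
# Both classes are hypothetical simplifications for illustration only.

class Copilot:
    def __init__(self, llm):
        self.llm = llm

    def ask(self, question):
        # One question in, one answer out; nothing is remembered afterwards.
        return self.llm.generate(question)


class Agent:
    def __init__(self, llm):
        self.llm = llm
        self.history = []     # conversation state carried across turns
        self.coaching = []    # business-specific guidance supplied by users

    def coach(self, guidance):
        # e.g. "Our fiscal year starts in April" or "Exclude test accounts".
        self.coaching.append(guidance)

    def ask(self, question):
        prompt = (
            "Guidance:\n" + "\n".join(self.coaching) +
            "\n\nConversation so far:\n" + "\n".join(self.history) +
            "\n\nUser: " + question
        )
        answer = self.llm.generate(prompt)
        self.history += ["User: " + question, "Agent: " + answer]
        return answer
```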

                    “There are a lot of nuances to this. If you can’t coach it, then it’s not an agent. I don’t think you can coach a copilot,” said Karkhanis. “You can write custom prompts [but] that’s not coaching.” 

                    ThoughtSpot’s vision has been to enable users to understand their data by conversing with it. “The company has always approached relational search as being the ‘Google for data.’ Now, it positions itself as the ‘Google plus ChatGPT for data,’” concluded Karkhanis. 

                    ]]>