Sakana’s AI CUDA Engineer Delivers Up to 100x Speed Gains Over PyTorch
https://analyticsindiamag.com/ai-news-updates/sakanas-ai-cuda-engineer-delivers-up-to-100x-speed-gains-over-pytorch/ | Thu, 20 Feb 2025
The AI CUDA Engineer has successfully translated more than 230 out of 250 evaluated PyTorch operations.

Japanese AI startup Sakana AI has introduced The AI CUDA Engineer, an agentic framework that automates the discovery and optimisation of CUDA kernels for improved GPU performance. 

The company claims the framework can generate CUDA kernels with speedups ranging from 10 to 100 times over common PyTorch operations and up to five times faster than existing CUDA kernels used in production.

CUDA is a low-level programming interface that enables direct access to NVIDIA GPUs for parallel computation. Optimising CUDA kernels manually requires significant expertise in GPU architecture. Sakana AI’s new system uses LLMs and evolutionary optimisation techniques to automate this process, making high-performance CUDA kernel development more accessible.

“The coolest autonomous coding agent I’ve seen recently: use AI to write better CUDA kernels to accelerate AI. AutoML is so back!” said Jim Fan, senior research manager and lead of embodied AI at NVIDIA. He added that the most impactful way to utilise compute resources is by enhancing the future productivity of that very same compute.

According to Sakana AI, The AI CUDA Engineer converts standard PyTorch code into optimised CUDA kernels through a multi-stage pipeline. Initially, it translates PyTorch operations into CUDA kernels, often improving runtime without explicit tuning. The system then applies evolutionary optimisation, using strategies such as ‘crossover’ operations and an ‘innovation archive’ to refine performance.

“Our approach is capable of efficiently fusing various kernel operations and can outperform several existing accelerated operations,” the company said. The framework builds on the company’s earlier research with The AI Scientist, which explored automating AI research. The AI CUDA Engineer extends this concept to kernel optimisation, using AI to enhance AI performance.

Sakana AI reported that The AI CUDA Engineer has successfully translated more than 230 out of 250 evaluated PyTorch operations. It has also generated over 30,000 CUDA kernels, of which over 17,000 were verified for correctness. Approximately 50% of these kernels outperform native PyTorch implementations.
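
Sakana AI has not published its full harness in this article, but the verification step it describes, checking a candidate kernel’s output against the PyTorch reference and measuring the speedup, can be sketched in a few lines of PyTorch. In this illustrative sketch, the candidate is just a TorchScript compilation of the reference, standing in for a generated CUDA kernel:

    import time
    import torch

    def reference_op(x):
        # Reference PyTorch implementation the generated kernel must match.
        return torch.relu(x) * 2.0

    def avg_time(fn, x, iters=100):
        # Mean wall-clock time per call; on GPU, wrap the timed region
        # with torch.cuda.synchronize() for accurate numbers.
        start = time.perf_counter()
        for _ in range(iters):
            fn(x)
        return (time.perf_counter() - start) / iters

    def verify_and_score(candidate, x):
        # Keep a kernel only if its output matches the reference within
        # tolerance; the score is its speedup over the reference.
        assert torch.allclose(candidate(x), reference_op(x), atol=1e-5)
        return avg_time(reference_op, x) / avg_time(candidate, x)

    x = torch.randn(1 << 20)
    candidate = torch.jit.script(reference_op)  # stand-in for a generated kernel
    print(f"speedup: {verify_and_score(candidate, x):.2f}x")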

The company has made the dataset available under a CC-BY-4.0 licence on Hugging Face. It includes reference implementations, profiling data, and performance comparisons against native PyTorch runtimes.

Sakana AI has also launched an interactive website where users can explore the dataset and leaderboard rankings of optimised kernels. The platform provides access to kernel code, performance metrics, and related optimisation experiments.

ECI Mandates Labelling of AI Generated Content in Political Campaigns
https://analyticsindiamag.com/ai-news-updates/eci-mandates-labelling-of-ai-generated-content-in-political-campaigns/ | Thu, 16 Jan 2025
Disclaimers must accompany campaign advertisements or promotional materials utilising synthetic content.

Ahead of the Delhi assembly elections, the Election Commission of India (ECI) has reinforced its directive for political parties to label and disclose AI-generated and synthetic content used in election campaigns.

The advisory mandates that parties explicitly label images, videos, audio, or other materials significantly altered by AI technologies with notations such as “AI-Generated,” “Digitally Enhanced,” or “Synthetic Content.”

Additionally, disclaimers must accompany campaign advertisements or promotional materials utilising synthetic content.

Chief Election Commissioner Rajiv Kumar has consistently warned about the dangers of AI and deepfakes exacerbating misinformation. In a statement, he emphasised that such technologies have the potential to undermine public trust in electoral processes. 

In a letter to the presidents, general secretaries, and chairpersons of all national and state-recognised political parties, ECI Joint Director Anuj Chandak highlighted how advancements in AI have enabled the creation of highly realistic synthetic content, including images, videos, and audio.

Acknowledging the growing impact of AI-generated and synthetic content on public opinion, the Election Commission has urged political parties, their leaders, candidates, and star campaigners to prominently label such content when shared on social media or other platforms during campaigns.

Last year, during the Lok Sabha elections, the ECI issued guidelines for the ethical and responsible use of social media platforms, further demonstrating its commitment to maintaining transparency and fairness in campaigns.

The latest advisory aligns with the ECI’s broader efforts to ensure a level playing field in elections, particularly through responsible use of AI and digital platforms. During the Global Election Leaders Summit (GELS) 2024, the Commission reiterated the importance of ethical practices in leveraging technology for electoral campaigns.

‘India Has Missed the GenAI Bus and No Amount of Funds Can Cover it’
https://analyticsindiamag.com/ai-features/india-has-quietly-lost-the-genai-bus-no-amount-of-funds-can-cover-it/ | Thu, 16 Jan 2025
In 2024, the Indian tech landscape raised around $11.3 billion from investors, which is negligible compared to the West’s $184 billion.

With each passing week in the global AI landscape, the goalpost for building generative AI and competing with players like Google and OpenAI seems to be ever-shifting. A few years ago, Google introduced the Transformer architecture; later, OpenAI released ChatGPT. This year, the conversation is around agentic AI.

Despite pouring in billions of dollars, India seems to be quietly losing the race, because that money is simply not enough. In 2024, the Indian tech landscape raised around $11.3 billion from investors, which is negligible compared to the West’s $184 billion.

The only bright side is that building a product in India is far cheaper than in the West, along with the availability of a vast and affordable talent pool.

HCLTech, meanwhile, revealed that it is aiming to integrate AI services for 100 clients by FY26. “Generative AI is getting…real. The cost of using an LLM or conversational model has dropped by over 85% since early 2023, making more use cases viable,” said CEO and MD C Vijayakumar.

Despite this, nothing revolutionary has come out of India. “The pace of AI progress is so rapid that we simply cannot catch up by relying on engineers and researchers alone. Without big corporations or government backing, India’s GenAI dreams will remain just that – dreams,” said a researcher in a Reddit discussion titled ‘India has quietly lost the Gen-AI bus also, and no amount of investment will cover it now’.

Cost Drops for Services, Not for Building Products

Generative AI and quantum computing require billions in funding. While US giants like Google, OpenAI, Anthropic, and Microsoft lead the charge, and China uses open-source strategies to scale, India’s resources and interest in research are inadequate.

Many believe India has already missed the bus. A recurring theme in discussions with AIM is India’s lack of fundamental research in areas like Transformer architectures and their hardware execution. While countries like the US and China are making significant strides, India’s contribution remains negligible, which, to some extent, can be attributed to the lack of funding.

Developing generative AI models demands immense capital, yet India’s private sector remains reluctant to invest in long-term research. This is clearly highlighted in the earnings calls of the country’s big IT firms.

Vedant Maheshwari, CEO of quso.ai, believes foundational AI requires significant capital and patience, which is harder to secure in India. “While funding here is substantial, it’s mostly application-focused rather than foundational,” he explained. 

A student from a premier Indian institute observed, “The research output from China in just the past two years has placed them decades – if not a century – ahead of us.” 

The volume and quality of papers emerging from US and Chinese institutions reflect a culture that prioritises innovation over mere service delivery. Only 1.4% of papers at top AI research conferences came from India, while the US and China accounted for 30.4% and 22.8%, respectively.

The Indian government, possibly constrained by limited budgets, struggles to fill the gap.

This sentiment is echoed across the board. For instance, a quantum computing researcher shared how a company offered them just ₹20,000 to conduct advanced research.

So What’s the Point?

While speaking with AIM, several industry leaders agreed that there was no point in competing to build the largest LLM. To put this in perspective, TCS chief K Krithivasan recently said that there is no huge advantage for India in building its own LLMs, since so many are already available.

This aligns with Infosys co-founder Nandan Nilekani’s idea of making India the AI use case capital of the world.

The reason is simple – lack of capital. “Who will give $200 million to a startup in India to build an LLM?” Mohandas Pai, head of Aarin Capital and former CFO of Infosys, told AIM when asked about the lack of innovation from Indian IT.

“Why is nothing like Mistral coming from India?” he asked rhetorically. “There is nobody…Creating an LLM or a big AI model requires large capital, time, a huge computing facility, and a market. All of which India does not have.”

Though India has startups like Sarvam, TWO, and Krutrim building products, the impact that they have created when compared to something like ChatGPT is minuscule, simply due to the vast difference in investments.

Despite this, there are predictions that India will have around 100 AI unicorns over the next decade.

To put things into perspective, Anthropic is looking to raise $2 billion in a funding round, raising its valuation to $60 billion. In comparison, Krutrim raised a $50 million round, and Sarvam AI raised $41 million. 

While speaking at Cypher 2024, Pai called on the Indian government to significantly increase its investment in AI. He pointed out that although the central government spends ₹90 lakh crore annually, only ₹3,000 to ₹4,000 crore is allocated for innovation – a sum he referred to as “peanuts”. “The government of India should invest ₹50,000 crore in AI.” If that happens, the Indian tech ecosystem will probably no longer struggle for funds.

Focus on Short Term

India’s tech sector continues to prioritise short-term gains from outsourced IT services rather than investing in creating globally competitive products. Indian startups are also busy making API wrappers for SaaS instead of pushing the boundaries of core research, largely because of funding constraints.

Amit Sheth, the chair and founding director of the Artificial Intelligence Institute of South Carolina (AIISC), earlier told AIM that only a handful of universities are able to publish research at top conferences.

“In the USA, all the projects they get to work on involve advancing the state-of-the-art (research),” Sheth added. He also highlighted the issue of a publication racket prevalent in India and several other developing countries, with only a handful of researchers from select universities standing out as exceptions.

India’s elite institutions, such as the Indian Institute of Science (IISc), are also hamstrung by limited budgets. Notably, the institute’s entire budget is around ₹1,000 crore, which is barely enough to compete with global AI research.

India’s academic framework, especially in engineering and technology, is increasingly criticised for emphasising quantity over quality. Students are often required to publish multiple research papers, many of which lack originality. 

Despite the gloomy outlook, some believe there’s hope for the future if immediate corrective measures are taken. India needs a paradigm shift in its approach to education, funding, and research. The global race in generative AI is a high-stakes game, and India appears to be losing.

Accenture Hits Record $4.2 Billion in Generative AI Sales
https://analyticsindiamag.com/ai-news-updates/accenture-hits-record-4-2-billion-in-generative-ai-sales/ | Fri, 20 Dec 2024
In its last earnings call in September 2024, Accenture announced $1 billion revenue from generative AI, which was a jump from $900 million in the previous quarter.

Accenture, in its latest earnings call for Q1 FY25, reported record generative AI bookings of $1.2 billion, bringing the total to $4.2 billion since September 2023. This marks the company’s highest quarterly bookings in the segment, reflecting growing client investments in generative AI.

In its last earnings call in September 2024, Accenture announced $1 billion in generative AI sales, which was a jump from $900 million in the previous quarter. The company was the first in its industry to disclose generative AI deal values; in June 2023, it reported $100 million in pure-play generative AI projects for the quarter. The latest bookings are almost double those reported in the last quarter.

“It’s not really different from the kinds of productivity that we’ve been experiencing. And here, of course, there’s an added wrinkle in that generative AI. In order for us to use it with our clients, they have to allow us to use it and they have to prioritise,” said Julie Sweet, Accenture’s CEO, during the company’s post-earnings call.

She emphasised that organisations must first invest in building robust data foundations before scaling AI initiatives. “We do not currently see an improvement in overall spending by our clients, particularly on smaller deals. When those market conditions improve, we will be well positioned to capitalise on them as we continue to meet the demand for the critical programmes our clients are prioritising.”

In the August 2024 quarter, Accenture secured $1 billion in generative AI orders, bringing its yearly total to $3 billion. The segment accounted for 6.4% of its overall $18.7 billion bookings for the quarter.

Accenture has also added 24,000 employees in the quarter, bringing its total workforce to 799,000, with a significant portion of hiring concentrated in India. Sweet added that the company has steadily increased its data and AI workforce, reaching approximately 57,000 practitioners.

Despite its strong performance, Accenture maintained a cautious outlook on the global economy. The company’s revenue for the first quarter stood at $17.69 billion, a 7.8% sequential increase, leading to a revision in full-year growth projections to 4-7%, up from 3-6%.

Indian IT firms, however, are yet to disclose such revenues from generative AI.

In October, Accenture partnered with NVIDIA to launch Accenture NVIDIA Business Group, aimed at helping enterprises scale AI adoption. This initiative includes training for 30,000 professionals globally to assist clients in reinventing processes and expanding the use of enterprise AI systems.

Composable Architectures Are Non-Negotiable
https://analyticsindiamag.com/ai-highlights/composable-architectures-are-non-negotiable/ | Mon, 16 Dec 2024
The modularity of composable architectures enables a low-code or no-code approach, opening AI development to a wider audience and accelerating the adoption of generative AI across industries.

In the rapidly evolving world of generative AI applications, composable architecture is emerging as a framework for building scalable AI-powered applications. Integrating independent, modular components via APIs enables developers to create customised solutions quickly. 

“When we talk about composable architecture, we really mean building larger systems by assembling smaller, independent modules that can be easily swapped in and out,” explained Jaspinder Singh, principal consultant at Fractal, during an interaction with AIM.

Designing for Scale with Composable Architecture 

This modular approach enables developers to build applications by combining several smaller modules, providing flexibility, scalability, and enhanced control over the deployment of AI solutions. 

“Not every project needs this approach,” Singh pointed out. “If you’re building something small and straightforward, you might be overthinking it if you try to make everything modular. But if you are planning to scale up, if lots of people will use your application, that’s when composable architecture really shines.” 

Scalability is one of the primary benefits of composable architectures. Singh emphasised that individual modules can be scaled independently based on demand, optimising resources for generative AI applications.

For example, a data processing module might require more frequent scaling than a front-end user interface. This selective scaling manages costs by avoiding unnecessary resource allocation.

The composable architecture is designed to allow for rapid experimentation and granular control, which are especially useful in the fast-paced world of generative AI. With new AI models appearing frequently, the ability to integrate and test them with minimal disruption to the overall system ensures that applications stay relevant and current.

The composable paradigm also allows a balance between custom development and leveraging off-the-shelf modules. Using modular APIs and established components for routine tasks allows developers to focus on refining specific business logic, reducing time to market and enabling faster iterations. 

“Companies can’t afford to spend months building everything from the ground up anymore,” according to Singh. “These modular components let you move quickly and stay competitive, especially in fast-paced tech-powered industries.”

Integrating Foundation Models in Generative AI Systems

Foundation models serve as fundamental building blocks in composable generative AI systems. These models, which serve as a base layer, can be fine-tuned or augmented for specific tasks, providing a versatile starting point within modular applications. 

A content creation system exemplifies this flexibility: organisations can integrate GPT-4 for text generation alongside image generation models like Flux-pro, resulting in a seamless workflow. This modular approach enables strategic combinations of best-fitting AI capabilities.

According to Singh, the output from each model can be routed to specialised modules for further processing, such as plagiarism detection, grammar correction, or style enhancement. This results in a robust but flexible workflow in which each component performs its specialised function while maintaining system cohesion. 
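
As a minimal sketch of this pattern, each stage below is an independent, swappable callable; the model calls are stubbed out, since the concrete services mentioned above (GPT-4, Flux-pro, the review modules) are examples rather than a prescribed stack:

    from typing import Callable

    # Each module is an independent callable that can be swapped out
    # without touching the rest of the system.
    def generate_text(brief: str) -> str:
        return f"Draft for: {brief}"          # stand-in for a GPT-4 call

    def check_grammar(text: str) -> str:
        return text                           # stand-in for a grammar module

    def enhance_style(text: str) -> str:
        return text + " [polished]"           # stand-in for a style module

    def compose(*stages: Callable[[str], str]) -> Callable[[str], str]:
        # Chain stages left to right; replacing a stage is a one-line change.
        def pipeline(payload: str) -> str:
            for stage in stages:
                payload = stage(payload)
            return payload
        return pipeline

    content_pipeline = compose(generate_text, check_grammar, enhance_style)
    print(content_pipeline("composable GenAI architectures"))

An input- or output-vetting module of the kind discussed later would slot in as just another stage.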

The architecture excels in adaptability, and organisations can improve or replace individual components as technology evolves, ensuring that their AI systems remain current without requiring complete rebuilds.

Architecting Better Prompt Management

Prompt engineering is critical in generative AI applications, but managing and optimising prompts at scale poses significant challenges. Composable architecture addresses this issue by treating prompt management as a separate module within the overall system.

“We have seen organisations struggle with prompt consistency and version control,” Singh points out. “By incorporating a centralised prompt library into the composable architecture, teams can standardise their approaches while remaining flexible. This is especially useful when combined with experimentation features like A/B testing prompts, models, and data variations.”

The composable architecture enables this structured approach to prompt management, monitoring, and model evaluation by allowing developers to manage each of these activities within their own modules.
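
Treated as its own module, prompt management can start as a small versioned registry with an assignment hook for experiments. The sketch below is illustrative, not any particular team’s implementation:

    import random

    class PromptLibrary:
        """Centralised, versioned store of prompt templates."""
        def __init__(self):
            self._store = {}  # (name, version) -> template

        def register(self, name, version, template):
            self._store[(name, version)] = template

        def get(self, name, version):
            return self._store[(name, version)]

        def ab_pick(self, name, versions, weights=None):
            # Randomly assign one candidate version for A/B testing.
            return random.choices(versions, weights=weights, k=1)[0]

    lib = PromptLibrary()
    lib.register("summarise", "v1", "Summarise in 3 bullets: {text}")
    lib.register("summarise", "v2", "You are an editor. Summarise briefly: {text}")

    version = lib.ab_pick("summarise", ["v1", "v2"])
    prompt = lib.get("summarise", version).format(text="…article body…")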

Security and Compliance in Composable Generative AI Systems

While composable architectures provide increased flexibility, they also present unique security and compliance challenges. The distributed nature of modular generative AI systems requires data security management, as sensitive data may flow through multiple modules. 

Compliance with data protection laws is critical, especially when data needs to move beyond an organisation’s infrastructure. In such cases, only necessary data should be transferred, with all confidential information handled securely in-house.

Moreover, generative AI models may be vulnerable to adversarial attacks, in which malicious inputs attempt to manipulate model behaviour. Singh recommends that input and output vetting should be a regular part of the composable AI pipeline, along with secure communication channels and access control mechanisms. A strong data governance framework, as well as regular security audits, help to ensure the security of application environments.

Composing the Way Forward

The flexibility of composable architectures offers a promising path forward for generative AI applications. As standardised interfaces evolve, Singh highlights that organisations can avoid vendor lock-in and experiment with competing AI solutions to find those best suited to their needs. 

The modularity of composable architectures facilitates a low-code or no-code approach, making AI development more accessible and accelerating the adoption of generative AI across industries.

However, implementing composable architectures can be challenging. Integrating multiple modules and transitioning from experimental to production environments presents challenges, especially as AI tools and technologies advance rapidly. Data privacy, intellectual property rights, and model reliability remain key areas of focus, demanding ongoing attention as organisations scale their generative AI applications.

Singh recommends comprehensive monitoring throughout the AI application lifecycle, from ideation to deployment, to ensure that modular generative AI systems operate seamlessly. Observability frameworks and GenAIOps practices can track metrics such as model accuracy, application performance, and cost efficiency. This would provide a comprehensive view of the system’s health and aid in the development of generative AI solutions that are both reliable and effective.

By embracing composable architectures, organisations can position themselves to adapt swiftly to AI’s evolving landscape, benefiting from the enhanced flexibility, scalability, and security that modular systems provide.

How AI Dragons Set GenAI on Fire This Year
https://analyticsindiamag.com/deep-tech/how-ai-dragons-set-genai-on-fire-this-year/ | Wed, 27 Nov 2024
Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries.

If you thought the buzz around AI would die down in 2024, think again. Persistent progress in hardware and software is unlocking possibilities for GenAI, proving that 2023 was just the beginning.

2024, the Year of the Dragon, marks an important shift as GenAI becomes deeply woven into the fabric of industries worldwide. Businesses no longer view GenAI as just an innovative tool. Instead, it is being welcomed as a fundamental element of their operational playbooks. CEOs and industry leaders, who recognise its potential, are now focused on seamlessly integrating these technologies into their key processes.

This year, the landscape evolved rapidly and generative AI became increasingly indispensable, progressing from an emerging trend to a fundamental business practice.

Scale and Diversity

An important aspect is the growing understanding of how GenAI enables both increased volume and variety of applications, ideas and content. 

The overwhelming surge in AI-generated content is leading to consequences we are just starting to uncover. According to reports, over 15 billion images were generated by AI in one year alone – a volume that once took humans 150 years to achieve. This highlights the need for the internet post-2023 to be viewed through an entirely new lens.

The rise of generative AI is reshaping expectations across industries, setting a new benchmark for innovation and efficiency. This moment represents a turning point where ignoring the technology is not just a lost opportunity, but could also mean falling behind competitors.

“The top open source models are Chinese, and they are ahead because they focus on building, not debating AI risks,” said Daniel Jeffries, chief technology evangelist at Pachyderm. 

China’s success is underpinned by its focus on efficiency and resource optimisation. With limited access to advanced GPUs due to export restrictions, Chinese researchers have innovated ways to reduce computational demands and prioritise resource allocation. 

“When we only have 2,000 GPUs, the team figures out how to use it,” said Kai-Fu Lee, AI expert and CEO of 01.AI. “Necessity is the mother of innovation.” 

He further highlighted how his company transformed computational bottlenecks into memory-driven tasks, achieving inference costs as low as 10 cents per million tokens. “Our inference cost is one-thirtieth of what comparable models charge,” Lee further said. 

The rise of Chinese AI extends beyond its borders, with companies like MiniMax, ByteDance, Tencent, Alibaba, and Huawei targeting global markets. 

MiniMax’s Talkie AI app, for instance, has 11 million active users, half of whom are based in the US. 

At the Wuzhen Summit 2024, analysts noted that as many as 103 Chinese AI companies were expanding internationally, focusing on Southeast Asia, the Middle East, and Africa, where the barriers to entry were lower than the Western markets. 

ByteDance has launched consumer-focused AI tools like Gauth for education and Coze for interactive bot platforms, while Huawei’s Galaxy AI initiative supports digital transformation in North Africa. 

AI Video Models 

Models like Kling and Hailuo have outpaced Western competitors like Runway in speed and sophistication, which represents a shift in leadership in this emerging domain. This is reflected in advancements in multimodal AI, where models like LLaVA-o1 rival OpenAI’s vision-language models by using structured reasoning techniques that break down tasks into manageable stages.

The Rugged Boundary

In 2023, it became clear that generative AI is not just elevating industry standards, but also improving employee performance. According to a YouGov survey, 90% of workers agreed that AI boosts their productivity. Additionally, one in four respondents use AI daily, with 73% using it at least once a week.

Another study revealed that, when properly trained, employees completed 12% more tasks and worked 25% faster with the assistance of generative AI, while the overall quality of their work improved by 40%. The greatest improvements were seen among low-skilled workers. However, for tasks beyond AI’s capabilities, employees were 19% less likely to produce accurate solutions.

This dual nature has led to what experts call the ‘jagged frontier’ of AI capabilities.

On one side, AI now performs tasks with remarkable accuracy and efficiency that were once deemed beyond machines’ reach. On the other, it struggles with tasks that require human intuition. These areas, defined by nuance, context, and complex decision-making, are where the binary logic of machines currently falls short.

Cheaper AI

As enterprises begin to explore the frontier of generative AI, we might see more AI projects take shape and become standard practice. This shift is driven by the decreasing cost of training LLMs, thanks to advancements in silicon optimisation, which is expected to halve costs every two years. Despite growing demand and global shortages, the AI chip market is set to become more affordable in 2024, with new alternatives to industry leaders like NVIDIA emerging.

Moreover, new fine-tuning techniques such as self-play fine-tuning are making it possible to strengthen LLMs without relying on additional human-defined data. These methods use synthetic data to develop better AI with fewer human interventions.

Unveiling the ‘Modelverse’

The decreasing cost is enabling more companies to develop their own LLMs and highlighting a clear trend towards accelerating innovation in LLM-based applications in the next few years.

By 2025, we will likely see the emergence of locally executed AI instead of cloud-based models. This shift is driven by hardware advances like Apple Silicon and the untapped potential of mobile device CPUs.

In the business sector, small language models (SLMs) will likely find greater adoption by large and mid-sized enterprises because of their ability to address niche requirements. As implied by their name, SLMs are more lightweight than LLMs. This makes them perfect for real-time applications and easy integration across various platforms.

While LLMs are trained on massive, diverse datasets, SLMs concentrate on domain-specific data. In such cases, the data is often from within the enterprise. This makes SLMs tailored to industries or use cases, thereby ensuring both relevance and privacy. 

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

Organisations are encouraged to view AI as a collaborative partner rather than just a tool. By effectively training ‘AI dragons’ to understand their capabilities and integrating them into workflows, businesses can unlock new levels of productivity and innovation.

The rise of AI dragons in 2024 represents a significant evolution in how AI is perceived and utilised. As organisations embrace these technologies, they must balance innovation with ethical considerations, ensuring that AI serves as a force for good.

From POC to Product: Measuring the ROI of Generative AI for Enterprise
https://analyticsindiamag.com/ai-highlights/from-poc-to-product-measuring-the-roi-of-generative-ai-for-enterprise/ | Wed, 13 Nov 2024
Measuring the ROI of GenAI investments is not as straightforward as calculating the savings from a new software tool.

The years 2023 and 2024 have been game-changers in the world of AI. What initially started as a subtle shift towards automation has now turned into a full-blown revolution, disrupting traditional ways of doing business. Generative AI is no longer seen as just an extension of AI but as a distinct technology with diverse applications. 

Vijay Raaghavan, the head of enterprise innovation at Fractal, highlights this transformation, particularly focusing on how organisations are now moving beyond experimentation to actively invest in generative AI solutions and maximise their value.

Consumers Lead, Enterprises Follow

Interestingly, Raaghavan noted that the early traction for GenAI wasn’t driven by businesses but by consumers. The virality of tools like ChatGPT caught the attention of millions, compelling enterprises to take notice. Since it took ChatGPT only about two months to reach 100 million users, the business world couldn’t ignore the potential of generative AI, and more specifically, LLMs.

Soon, enterprises began experimenting with LLMs, and some eventually started building generative AI solutions. Two years after ChatGPT’s launch, generative AI is no longer an experiment but a reality.

“Leaders in the boardrooms began to ask whether their organisations should start building GenAI products,” Raaghavan said.

POCs to Real-World Applications

By late 2023 and into 2024, the GenAI landscape experienced yet another shift. By now, what began as exploratory proof-of-concept (POC) projects with conversational AI tools and chatbots had turned into serious discussions about investment. If 2023 was a breakout year, 2024 turned out to be the build-out year.

At present, the conversation has veered away from experimentation to figuring out whether GenAI can be turned into a product for the company or if it’s just plug-and-play. This transition from POCs to real-world applications has presented new challenges, particularly when it comes to measuring value, which is still the toughest part of driving investment in generative AI.

This is the “moment of truth” for enterprises. “During the experimentation phase, companies asked if it made sense for their organisations. Now that they’ve moved past that, the question is about return on investment (ROI),” Raaghavan pointed out.

Possibly, 2025 and beyond will be about scaling these investments and realising their full potential. “We’ve moved from POCs to full-scale deployment. The next step is value maximisation,” he said. This is also visible among several Indian enterprises and IT giants as they increasingly push POCs to products for their clients.

Quantifying ROI: From FTEs to Conversion Rates

Measuring the ROI of GenAI investments is not as straightforward as calculating the savings from a new software tool. It involves a blend of quantitative and qualitative factors, from time saved to human value unlocked. “Whenever you talk about any investment, the CEO conversation is all about value and ROI.”

As the world speaks of replacing workers with generative AI, the most basic form of value measurement is productivity gains, typically measured in hours saved or full-time employees (FTEs) freed up. People aren’t discussing replacing people outright because it’s a sensitive topic, but some leaders are reallocating roles.

For example, many content writers are becoming content reviewers because GPT models can generate drafts which humans just need to review and refine.

This shift is what Raaghavan describes as “human value unlocking”. GenAI allows organisations to elevate employees from mundane tasks to higher-order roles, which can lead to a more engaged workforce. While AI performs the redundant tasks, humans move up to more meaningful roles.

While some aspects of GenAI’s value are difficult to quantify, tangible metrics are emerging, particularly around FTE savings. Some organisations are measuring how many FTEs have been saved by introducing GenAI. For example, if a task previously required 10 full-time employees, introducing GenAI might save two, freeing them up for other projects.

In addition to FTE savings, companies also measure digital engagement and conversion rates, especially in sectors like e-commerce. Organisations use metrics like percentage engagement and conversion to measure the impact of GenAI. For instance, a consumer might use GenAI to make a more informed purchase decision faster, which improves conversion rates.
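
With purely hypothetical figures, the basic FTE-savings arithmetic behind such ROI conversations looks like this:

    # Hypothetical figures for illustration only.
    ftes_saved = 2                  # staff freed up by the GenAI workflow
    cost_per_fte = 80_000           # fully loaded annual cost, in dollars
    annual_run_cost = 60_000        # model, infrastructure and maintenance

    annual_benefit = ftes_saved * cost_per_fte
    roi = (annual_benefit - annual_run_cost) / annual_run_cost
    print(f"ROI: {roi:.0%}")        # (160000 - 60000) / 60000, about 167%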

With so many companies adopting GenAI, staying competitive requires strategic investment. Raaghavan outlines a multi-layered approach: “Generative AI is not a plug-and-play solution. It requires the right data, hyperscale strategy, and long-term commitment.”

Rabbitt AI Announces Strategic Applications of Generative AI in Defense
https://analyticsindiamag.com/ai-news-updates/rabbitt-ai-announces-strategic-applications-of-generative-ai-in-defense/ | Mon, 11 Nov 2024
Rabbitt AI’s technology integrates deep learning models with diverse sensor inputs—from infrared and radar to audio and video—to detect unauthorised intrusions, environmental anomalies, and abnormal activities without human intervention.

Indian AI startup Rabbitt AI has launched a suite of GenAI tools to reshape military operations by minimising human involvement in high-risk zones. 

The core idea centres on reducing human exposure to danger. GenAI-powered drones, autonomous vehicles, and surveillance systems enable real-time threat detection and response, offering a safer, AI-driven alternative to traditional security methods. 

By incorporating diverse sensor data—from infrared and radar to audio and visual feeds—Rabbitt’s models detect unauthorised movements, environmental anomalies, and abnormal activities without human intervention.

“We are not very far from a future where AI with limbs can dominate battlefields,” said Harneet Singh, Rabbitt AI’s chief, who was previously an AI consultant to the South Korean Navy.

“One of our key missions is to protect lives at the borders by creating situationally aware, autonomous AI systems that can respond to threats by observing and analyzing sensor data in real-time,” he added. 


Singh highlighted the technology’s autonomy, saying, “With AI-powered systems, we can now provide uninterrupted, unbiased monitoring that ensures both coverage and efficiency, all while reducing operational costs.”

In addition to reducing personnel risks, Rabbitt’s GenAI tools also help streamline resources by automating many surveillance functions. The technology minimises reliance on human labour, which Singh says “not only reduces costs but increases accuracy, freeing military personnel to focus on strategic tasks.” The system’s AI-driven detection capabilities also lower the need for costly corrective actions.

Rabbitt AI is also advancing “human-machine teaming” by pairing GenAI with unmanned drones and ground vehicles to increase adaptability in hard-to-reach terrains. According to Singh, “This tech enables real-time situational awareness, allowing command centres to get immediate insights without the delay of human reporting, even in complex environments like urban areas or mountainous regions.”

Singh, an IIT Delhi alumnus recognised by DRDO and Indian military officials with an honorary medal, emphasised Rabbitt AI’s broader vision for defence. “Our work goes beyond developing AI models,” he said. “We are building a defence ecosystem where AI serves as a force multiplier, enhancing every soldier’s capabilities while increasing situational awareness and reducing decision-making time.”

Founded by Singh, Rabbitt.ai focuses on generative AI solutions, including custom LLM development, RAG fine-tuning, and MLOps integration. The company recently raised $2.1 million from TC Group of Companies and investors connected to NVIDIA and Meta. 

The company recently appointed Asem Rostom as its Global Managing Director to lead expansion across the MENA and Europe regions. Before this role, Rostom served as the managing director at Simplilearn.

The company has also launched Rabbitt Learning, a new division focused on transforming education access and workforce readiness in the MENA region. As a part of its expansion, Rabbitt AI has opened a new office in Riyadh, Saudi Arabia, to meet the growing demand for Gen AI skilling courses and digital transformation projects in the Gulf countries.

Can Google Beat AI Rivals and Keep the Ad Cash Rolling?
https://analyticsindiamag.com/ai-features/can-google-beat-ai-rivals-and-keep-the-ad-cash-rolling/ | Tue, 05 Nov 2024
The rise of AI-driven search engines, driven by ChatGPT and Perplexity, poses a significant threat to Google’s search and ad revenue dominance.

Tech giant Google’s advertising business continues to drive its financial performance, with the company posting its highest-ever ad revenue of $65.85 billion for Q3 FY24. This constitutes nearly 75% of its total revenue.

Amid the company’s expansion drive through Google Cloud and AI innovations and investments, advertising revenue remains the core of its operations. Ad revenue increased 10% year-on-year, indicating the company’s dominance in search-based and video ads. 

Search revenue alone contributed $49.39 billion to the total, a 12% increase from last year. YouTube also performed well, earning $8.92 billion for the quarter. 

Sundar Pichai, CEO of Google’s parent company Alphabet, said, “The momentum across the company is extraordinary. Our commitment to innovation, as well as our long-term focus and investment in AI, are paying off with consumers and partners benefiting from our AI tools.”

Quarter      Ad Revenue (in billions)      Year-over-Year % Change
Q3 2024      $65.8                         10.41%
Q3 2023      $59.6                         9.48%
Q3 2022      $54.4                         2.54%
Q3 2021      $53.1

The company’s reliance on ad revenues is nothing new. Over the past few years, its ad revenues have been on an upward trajectory with Q3 marking the highest ever earnings. Google has successfully leveraged its search engine, user data and AI features to effectively manage the ad call-to-action behaviour. 

The company believes the ads on AI Overviews, a feature that summarises content on the search query and displays it under the search box, have allowed users to quickly connect with relevant businesses and services, thereby making the ad process more relevant. 

Rising Competition in the Ad Space 

However, its heavy reliance on ads makes its business challenging in a competitive landscape. The rise of AI-driven search engines, led by ChatGPT and Perplexity, poses a significant threat to Google’s search and ad revenue dominance.

Both ChatGPT and Perplexity are releasing Chrome extensions for their search engines. Besides, Google’s main competitor, Meta, too, is reportedly entering the search engine space. Meta has shown consistent progress in the last two years, with its ad revenue for Q3 FY24 touching $39.9 billion, an 18.7% jump year-on-year.

Perplexity co-founder and CEO Aravind Srinivas posted on X about Google’s approach to raking in ad revenues, despite his own company entering the search space.

With these developments, Google can afford neither to lose its focus on ad revenue nor to ignore the emerging competitors.


Meta is developing an AI search engine to answer queries on Meta AI. Currently the company is relying on Google and Microsoft’s Bing for the same. It is obvious that dominant players are trying to build their ecosystem to ensure their customers stay on their portal with minimal dependency on competitors. 

Meta has the advantage of a large user base and data from Facebook and Instagram platforms, so training the AI search platform might not be problematic. Meta’s web crawler is already scraping data for AI training. The company has even partnered with publications such as Reuters to bring news-related answers. 

AI Powers Ads

Even as the company’s Q3 FY24 results surpassed analyst expectations on both the top and bottom lines, with consolidated revenues at $88.3 billion and Google Cloud revenue up 35% to $11.4 billion, advertising income remains a key growth driver.

Google Cloud revenue was led by accelerated growth in Google Cloud Platform (GCP) across AI Infrastructure, Generative AI Solutions, and core GCP products.

Pichai credited their long-term focus and investment in AI as key drivers of success for the company and its customers, even highlighting the Gemini API’s 14x growth over the past six months.

Google claims that both customers and advertisers have found AI features to improve the user experience across its products and services. Advertisers have been using Gemini to build and test ad creatives at scale.

Google’s latest text-to-image model, Imagen 3, was also rolled out in Google Ads. The model was tuned with ad performance data across industries to provide customers with high-quality images for their campaigns.

It’s interesting to note that AI-powered feature integration on search has also been economical. Pichai mentioned that when the company first began testing AI Overviews, they had lowered the machine costs per query ‘significantly,’ and now, in 18 months, the costs have been reduced by more than 90%. 

“AI is expanding our ability to understand the intent and connect it to our advertisers. This allows us to connect highly relevant users with the most helpful ad, and deliver business impact to our customers,” said Philipp Schindler, SVP and CBO at Google, on the earnings call. 

The Transformative Impact of Generative AI on IT Services, BPO, Software, and Healthcare
https://analyticsindiamag.com/ai-highlights/the-transformative-impact-of-generative-ai-on-it-services-bpo-software-and-healthcare/ | Tue, 22 Oct 2024
“As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said the Technology Holdings panel while speaking at Cypher 2024, India’s biggest AI conference organised by AIM Media House.

Technology Holdings, an award-winning global boutique investment banking firm that delivers M&A and capital-raising advisory services to technology services, software, consulting, healthcare life sciences, and business process management companies globally, recently launched its report titled “What Does GenAI REALLY Mean for IT Services, BPO, and Software Companies: A US $549 Billion Opportunity or Threat?”

“As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said Venkatesh Mahale, Senior Research Manager at Technology Holdings, while speaking at Cypher 2024. He added that in the BPO sector, GenAI is expected to have the biggest impact, particularly in areas such as automation and advanced analytics.

Speaking about the impact of generative AI in the IT sector, Sriharsha KV, Associate Director at Technology Holdings, said, “IT services today generate approximately one-and-a-half trillion dollars in revenue, a figure expected to double in the next eight to ten years.”

He added that Accenture, the number one IT services company in the world, has started disclosing GenAI revenues, and their pipeline is already at a half-billion run rate for the year. “The pipeline has scaled from a few hundred million last year to, I would say, 300 to 400%. That makes us strongly believe that GenAI is real.”

He noted that data centre and chip companies are part of the upstream sectors, as they are responsible for creating the generative AI infrastructure. In contrast, IT services companies are downstream but are gaining momentum in automating building processes using GenAI.

Sriharsha stated that generative AI has a notable impact on testing, debugging, DevOps, MLOps, and DataOps.

The panel at Cypher further discussed the growing trends in mergers and acquisitions (M&A) driven by GenAI. “2023 was a blockbuster year for funding in GenAI, with $20 to $25 billion infused into the sector,” Sriharsha said. This surge in investment has also translated into increased M&A activity, particularly in the IT services and BPO sectors. “We’ve seen numerous acquisitions focused on integrating GenAI capabilities into industry-specific operations,” he added.

Sriharsha explained that in the BPO sector, GenAI is particularly disrupting contact centres. “By automating up to 70% of calls through a combination of chat, email, and voice interactions, companies can operate with fewer agents while maintaining service quality,” he said. This efficiency allows organisations to redirect resources to higher-value tasks, reshaping the way BPOs operate.

Enhancing Healthcare with GenAI


“India has a population of around 1.4 billion, but there is still a dearth of doctors and nurses,” said Anant Kharad, Vice President at TH Healthcare & Life Sciences. He added that generative AI has several use cases in the healthcare industry that can help solve these problems.

“GenAI will analyse my medical records and try to identify the issues I faced in the past and what I’m experiencing now. It will create a summary of all that and then provide it to the nurse for review, who will handle the initial treatment for the outpatient department. The doctor can then take it from there instead of nurses going through tons of paperwork,” he explained.

He said that this not only enhances patient care but also optimises healthcare workflows, allowing medical staff to focus on more complex cases. Moreover, he added that GenAI is playing a vital role in drug discovery and patient care strategies. “It is working with companies that reverse Type 2 diabetes,” Kharad shared. “It has used machine learning to analyse data from thousands of patients, creating effective treatment curricula that can be rolled out globally,” he said.

The Long-Term Implications of Generative AI

As companies navigate the potential disruptions brought on by generative AI, the long-term impacts on business models and service offerings cannot be overlooked. According to Kharad, the need for traditional models, like manual contact centres, is already being questioned in the BPO sector.

“Testing and debugging in IT services are also being challenged,” he said, suggesting that companies must evolve or risk obsolescence. The healthcare sector, however, appears poised for positive disruption through the application of generative AI. Kharad shared specific examples of how AI can enhance efficiency, especially in diagnostics.

“For instance, instead of a radiologist reading 20 reports a day, AI could enable them to process 100 reports,” he explained. This not only increases operational efficiency but also optimises resource allocation in a sector often constrained by staff shortages.

Furthermore, Kharad pointed out that major players like Amazon are already using generative AI to automate prescription orders based on data inputs. “If AI can handle 90% of the workload, it will reduce costs and provide faster service for patients,” he said.

Kharad further elaborated on the healthcare sector’s response to M&A trends, noting that biotech and health-tech companies are at the forefront. “Pharmaceutical companies in India are partnering with start-ups to drive innovation in drug discovery,” he said. 

For those interested in exploring the implications of generative AI further, Technology Holdings has launched a comprehensive report on its impact on IT services, BPOs, and software companies, available on the company’s website.

Adobe Launches Content Authenticity Web App to Protect Creators’ Work from Generative AI Misuse
https://analyticsindiamag.com/ai-news-updates/adobe-launches-content-authenticity-web-app-to-protect-creators-work-from-generative-ai-misuse/ | Tue, 08 Oct 2024
Adobe’s web app includes a feature that allows creators to signal whether they want their work to be used by AI models.

Adobe has unveiled the Adobe Content Authenticity web app, a free tool designed to protect creators’ work and ensure proper attribution. This new app enables users to easily apply Content Credentials—metadata that serves as a “nutrition label” for digital content—ensuring their creations are safeguarded from unauthorised use. 

Supported by popular Adobe Creative Cloud apps such as Photoshop, Lightroom, and Firefly, Content Credentials provide key information about how digital works are created and edited, offering creators ways to claim ownership and protect their creations.

The company launched its Content Authenticity Initiative in 2019. With over 3,700 members backing this industry standard, the initiative aims to combat misinformation and AI-generated deepfakes. Adobe’s new web app builds on this legacy, offering a centralised platform where creators can apply, manage, and customise their Content Credentials across multiple files, from images to audio and video.

Enhancing Creator Control 

A recent Adobe study revealed that 91% of creators want a reliable method to attach attribution to their work, with over half expressing concerns about their content being used to train AI models without their consent. In response, Adobe’s web app includes a feature that allows creators to signal whether they want their work used by AI models, ensuring their rights are respected.

“Adobe is committed to responsible innovation centered on the needs and interests of creators,” said Scott Belsky, chief strategy officer at Adobe. “By offering a simple, free way to attach Content Credentials, we are helping creators preserve the integrity of their work, while enabling a new era of transparency and trust online.”

The app also offers features such as batch credential application and the ability to inspect content for associated credentials through a Chrome extension. This ensures that the information remains visible, even if platforms or websites fail to retain it.

With this new tool, Adobe is not only empowering creators to protect their work but also driving a broader push for transparency across the digital ecosystem. The company has gone all in on generative AI: last month, it introduced new features in Adobe Experience Cloud, including Adobe Content Analytics and real-time experimentation tools, which help personalise, test, and evaluate AI-generated content across various channels while offering actionable insights to improve marketing performance and boost customer engagement.

Generative AI Cost Optimisation Strategies
https://analyticsindiamag.com/ai-highlights/generative-ai-cost-optimisation-strategies/ | Thu, 03 Oct 2024

As an executive exploring generative AI’s potential for your organisation, you’re likely concerned about costs. Implementing AI isn’t just about picking a model and letting it run. It’s a complex ecosystem of decisions, each affecting the final price tag. This article will guide you to optimise costs throughout the AI life cycle, from model selection and fine-tuning to data management and operations.

Model Selection

Wouldn’t it be great to have a lightning-fast, highly accurate AI model that costs pennies to run? Since this ideal scenario does not exist (yet), you must find the optimal model for each use case by balancing performance, accuracy, and cost.

Start by clearly defining your use case and its requirements. These questions will guide your model selection:

  • Who is the user?
  • What is the task?
  • What level of accuracy do you need?
  • How critical is rapid response time to the user?
  • What input types will your model need to handle, and what output types are expected?

Next, experiment with different model sizes and types. Smaller, more specialised models may lack the broad knowledge base of their larger counterparts, but they can be highly effective—and more economical—for specific tasks.

Consider a multi-model approach for complex use cases. Not all tasks in a use case may require the same level of model complexity. Use different models for different steps to improve performance while reducing costs.
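
As an illustration of the multi-model idea, the routing logic can be as small as choosing the cheapest model whose capabilities cover each step. The model names, costs, and capability sets below are placeholders, not vendor figures:

    # Placeholder models and per-1K-token costs; real values vary by provider.
    MODELS = {
        "small":  {"cost": 0.0005, "good_for": {"classify", "extract"}},
        "medium": {"cost": 0.003,  "good_for": {"summarise", "draft"}},
        "large":  {"cost": 0.03,   "good_for": {"reason", "plan"}},
    }

    def pick_model(task: str) -> str:
        # Choose the cheapest model whose capability set covers the task.
        eligible = [(m["cost"], name) for name, m in MODELS.items()
                    if task in m["good_for"]]
        return min(eligible)[1] if eligible else "large"  # fall back to largest

    for step in ["classify", "summarise", "reason"]:
        print(step, "->", pick_model(step))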

Fine-Tuning and Model Customisation

Pretrained foundation models (FMs) are publicly available and can be used by any company, including your competitors. While powerful, they lack the specific knowledge and context of your business.

To gain a competitive advantage, you need to infuse these generic models with your organisation’s unique knowledge and data. Doing so transforms an FM into a powerful, customised tool that understands your industry, speaks your company’s language, and leverages your proprietary information. Your choice to use retrieval-augmented generation (RAG), fine-tuning, or prompt engineering for this customisation will affect your costs.

Retrieval-Augmented Generation

RAG pulls data from your organisation’s data sources to enrich user prompts so the model delivers more relevant and accurate responses. Imagine your AI being able to instantly reference your product catalogue or company policies as it generates responses. RAG improves accuracy and relevance without extensive model retraining, balancing performance and cost efficiency.
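A minimal sketch of the retrieve-then-augment flow behind RAG follows, using TF-IDF keyword similarity as a cheap stand-in for the embedding search a production system would use; the documents, query, and final model call are invented for illustration.

```python
# Minimal RAG sketch: retrieve the most relevant snippets, then prepend
# them to the prompt. TF-IDF stands in for embedding-based retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 14 days of a return request.",
    "Premium support is available on the Enterprise plan only.",
    "Catalogue updates are published every Monday.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    vec = TfidfVectorizer().fit(documents)
    scores = cosine_similarity(vec.transform([query]), vec.transform(documents))[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
# The assembled prompt would then go to whichever model you have chosen.
```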

Fine-Tuning

Fine-tuning means training an FM on additional, specialised data from your organisation. It requires significant computational resources, machine learning expertise, and carefully prepared data, making it more expensive to implement and maintain than RAG.

Fine-tuning excels when you need the model to perform exceptionally well on specific tasks, consistently produce outputs in a particular format, or perform complex operations beyond simple information retrieval.

We recommend a phased approach. Start with less resource-intensive methods such as RAG and consider fine-tuning only when these methods fail to meet your needs. Set clear performance benchmarks and regularly evaluate the gains versus the resources invested.

Prompt Engineering

Prompts are the instructions given to AI applications. AI users, such as designers, marketers, or software developers, enter prompts to generate the desired output, such as pictures, text summaries, or source code. Prompt engineering is the practice of crafting and refining these instructions to get the best possible results. Think of it as asking the right questions to get the best answers.

Good prompts can significantly reduce costs. Clear, specific instructions reduce the need for multiple back-and-forth interactions that can quickly add up in pay-per-query pricing models. They also lead to more accurate responses, reducing the need for costly, time-consuming human review. With prompts that provide more context and guidance, you can often use smaller, more cost-effective AI models.
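To illustrate the difference, compare a vague instruction with an engineered one; the email and formatting rules below are invented for the example.

```python
# A vague prompt invites long, inconsistent answers and costly retries.
vague_prompt = "Summarise this customer email."

# An engineered prompt fixes the role, structure, and length up front,
# often letting a smaller, cheaper model succeed on the first attempt.
engineered_prompt = """You are a support triage assistant.
Summarise the customer email below in exactly three bullet points:
1. The customer's core problem
2. What they have already tried
3. The single action they are requesting
Respond in plain text, 60 words or fewer.

Email:
{email}"""

sample_email = ("My export has failed twice since Monday. I already cleared "
                "the cache. Please restore my access to scheduled reports.")
print(engineered_prompt.format(email=sample_email))
```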

Data Management

The data you use to customise generic FMs is also a significant cost driver. Many organisations fall into the trap of thinking that more data always leads to better AI performance. In reality, a smaller dataset of high-quality, relevant data often outperforms larger, noisier datasets.

Investing in robust data cleansing and curation processes can reduce the complexity and cost of customising and maintaining AI models. Clean, well-organised data allows for more efficient fine-tuning and produces more accurate results from techniques like RAG. It lets you streamline the customisation process, improve model performance, and ultimately lower the ongoing costs of your AI implementations.

Strong data governance practices can help increase the accuracy and cost performance of your customised FM. Good governance includes proper data organisation, versioning, and lineage tracking. On the other hand, inconsistently labelled, outdated, or duplicate data can cause your AI to produce inaccurate or inconsistent results, slowing performance and increasing operational costs. Good governance also helps ensure regulatory compliance, preventing costly legal issues down the road.

Operations

Controlling AI costs isn’t just about technology and data—it’s about how your organisation operates.

Organisational Culture and Practices

Foster a culture of cost-consciousness and frugality around AI, and train your employees in cost-optimisation techniques. Share case studies of successful cost-saving initiatives and reward innovative ideas that lead to significant cost savings. Most importantly, encourage a prove-the-value approach for AI initiatives. Regularly communicate the financial impact of AI to stakeholders.

Continuous learning about AI developments helps your team identify new cost-saving opportunities. Encourage your team to test various AI models or data preprocessing techniques to find the most cost-effective solutions.

FinOps for AI

FinOps, short for financial operations, is a practice that brings financial accountability to the variable spend model of cloud computing. It can help your organisation efficiently use and manage resources for training, customising, fine-tuning, and running your AI models. (Resources include cloud computing power, data storage, API calls, and specialised hardware like GPUs). FinOps helps you forecast costs more accurately, make data-driven decisions about AI spending, and optimise resource usage across the AI life cycle.

FinOps balances a centralised organisational and technical platform that applies the core FinOps principles of visibility, optimisation, and governance with responsible and capable decentralised teams. Each team should “own” its AI costs—making informed decisions about model selection, continuously optimising AI processes for cost efficiency, and justifying AI spending based on business value.

A centralised AI platform team supports these decentralised efforts with a set of FinOps tools and practices that includes dashboards for real-time cost tracking and allocation, enabling teams to closely monitor their AI spending. Anomaly detection allows you to quickly identify and address unexpected cost spikes. Benchmarking tools facilitate efficiency comparisons across teams and use cases, encouraging healthy competition and knowledge sharing.
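As one illustration of the anomaly detection such a platform team might run, here is a minimal sketch that flags a day’s AI spend when it exceeds the trailing mean by three standard deviations; the cost figures are made up.

```python
# Toy FinOps anomaly check over daily AI spend (figures are invented).
import statistics

daily_costs = [120, 115, 130, 125, 118, 122, 410]  # last value is a spike

def is_anomalous(history: list[float], today: float, z: float = 3.0) -> bool:
    """Flag today's cost if it sits more than z std devs above the mean."""
    return today > statistics.mean(history) + z * statistics.stdev(history)

history, today = daily_costs[:-1], daily_costs[-1]
if is_anomalous(history, today):
    print(f"Cost spike: ${today} vs trailing mean ${statistics.mean(history):.0f}")
```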

Conclusion

As more use cases emerge and AI becomes ubiquitous across business functions, organisations will be challenged to scale their AI initiatives cost-effectively. They can lay the groundwork for long-term success by establishing robust cost optimisation techniques that allow them to innovate freely while ensuring sustainable growth. After all, success depends on perfecting the delicate balance between experimentation, performance, accuracy, and cost.

]]>
Accenture and NVIDIA Partner to Train 30,000 Professionals to Scale Agentic AI for Enterprises https://analyticsindiamag.com/ai-news-updates/accenture-and-nvidia-partner-to-train-30000-professionals-to-scale-agentic-ai-for-enterprises/ Wed, 02 Oct 2024 13:32:56 +0000 https://analyticsindiamag.com/?p=10137269 Accenture AI Refinery platform will help companies commence their custom agentic AI journeys using the full NVIDIA AI stack.]]>

Accenture and NVIDIA have expanded their partnership with the launch of a new Accenture NVIDIA Business Group, aimed at helping enterprises scale AI adoption. This initiative includes training for 30,000 professionals globally to assist clients in reinventing processes and expanding the use of enterprise AI systems. 

The new business group will leverage Accenture’s AI Refinery platform, which uses NVIDIA’s AI stack, to help companies accelerate their AI journeys. The AI Refinery will be available across public and private cloud platforms and aims to streamline AI-powered simulation, process reinvention, and sovereign AI.

Scaling Agentic AI Systems

Accenture’s AI Refinery is set to scale the next frontier of AI: agentic AI. “We are breaking significant new ground with our partnership with NVIDIA and enabling our clients to be at the forefront of using generative AI as a catalyst for reinvention,” said Julie Sweet, chair and CEO of Accenture.

To support this initiative, Accenture is introducing a global network of AI Refinery Engineering Hubs in key regions, including Singapore, Tokyo, Malaga, and London. These hubs will focus on the large-scale development of AI models and operations. 

Jensen Huang, founder and CEO of NVIDIA, added, “AI will supercharge enterprises to scale innovation at greater speed.” This collaboration has already seen successful use cases, such as Indosat Group in Indonesia using agentic AI to develop industry-specific solutions in financial services.

Additionally, Accenture is debuting the NVIDIA NIM Agent Blueprint for virtual factory simulations, integrating NVIDIA Omniverse and Isaac software. Accenture’s marketing division has also begun using the AI Refinery platform with autonomous agents to streamline campaigns, achieving a 25-55% increase in speed to market.

Accenture has been on a roll adopting generative AI across its business, providing training and upskilling opportunities to its employees.

Agentic AI has been a hot topic of discussion across major tech providers over the last few weeks. From Oracle to Salesforce, major SaaS players have unveiled a number of agentic AI products across their wide product suites. There has also been a steady increase in autonomous database offerings for their customers.

]]>
Embracing the Future: How Agentic Systems are Revolutionising Enterprises https://analyticsindiamag.com/ai-features/embracing-the-future-how-agentic-systems-are-revolutionising-enterprises/ Wed, 02 Oct 2024 05:30:00 +0000 https://analyticsindiamag.com/?p=10137207 Sriram Gudimella from Tredence shared with AIM some valuable insights into the potential of these advanced systems that are poised to change how enterprises function.]]>

Automation had already begun transforming industries before generative AI came into the picture. Now, the next frontier of innovation is marked by the rise of agentic systems, which are autonomous systems capable of dynamic decision-making, learning from feedback, and executing complex tasks with minimal human intervention.

Sriram Gudimella from Tredence shared with AIM some valuable insights into the potential of these advanced systems that are poised to change how enterprises function.

The distinction between traditional automation and agentic systems is profound. “Traditional automation is efficient at performing repetitive tasks but lacks the flexibility and learning capability of agentic systems,” Gudimella explained.

He emphasised that traditional automation systems require human intervention for updates or iterations, often leading to delays in incorporating feedback. In contrast, agentic systems operate autonomously, continuously learning from real-time data and user feedback, enabling ongoing improvements without the need for human oversight.

To simplify this concept, Gudimella likened traditional automation to a chess piece that can only move as instructed, while agentic systems act more like a chess master, strategically assessing the entire board and autonomously planning optimal moves. “A chess piece follows orders, but a chess master anticipates, adapts, and ensures the most valuable outcomes,” he added.

This analogy captures the essence of how agentic systems surpass traditional automation by leveraging autonomy and adaptability.

Real-World Applications of Agentic Systems

Agentic systems are already making an impact across various industries, from commodity trading to gaming, healthcare, logistics, and even agriculture.

Gudimella shared a compelling example from the commodity trading sector, where Tredence is helping a client develop an agentic system to make autonomous decisions based on factors like inventory levels, competitor information, and procurement rates.

“The goal is to create a system that can scale without human or subject-matter expert dependence, enabling seamless decision-making across multiple geographies,” he explained.

Meanwhile, Tredence is also assisting a gaming company in enhancing its decision-making processes. The agentic system analyses data on game performance across different geographies, determining which games and promotions are successful and why. This provides valuable insights that can be applied to future business strategies.

Agentic systems are also beginning to show promise in agriculture. “In advanced use cases, these systems are improving productivity and value by streamlining processes and optimising resources,” said Gudimella. The versatility of agentic systems allows them to be adapted for diverse applications, showcasing their potential to transform multiple industries.

Accelerating Digital Transformation

One of the most exciting aspects of agentic systems is their ability to accelerate digital transformation. “Tasks that used to take weeks can now be completed at the press of a button,” Gudimella noted. These systems break down complex tasks into smaller segments, assigning agents to handle specific elements while orchestrating the entire process.

This level of automation not only saves time but also ensures that resources are optimised, enhancing decision-making capabilities within organisations.

Agentic systems provide real-time insights by analysing vast amounts of data without the bias that often accompanies human decision-making. “They simulate different scenarios, testing various agents and tools to find the best solution in real-time,” Gudimella explained. This ability to quickly integrate diverse datasets and offer a holistic view allows businesses to make more informed, data-driven decisions.

However, despite their promise, the implementation of agentic systems is not without challenges. Gudimella highlighted the need for skilled professionals capable of designing, implementing, and managing these systems. “Not every organisation has the requisite skill sets to handle such complex technology,” he said.

Additionally, the cost of deploying agentic systems can be significant, particularly for organisations without experience in managing these advanced solutions.

Gudimella also stressed the importance of guardrails and governance to ensure the reliability and accuracy of these systems. While they are autonomous, businesses must establish mechanisms to prevent errors or misuse. “Guardrails are crucial to prevent the system from delivering irrelevant responses or losing the users’ confidence,” he emphasised.

Furthermore, ethical concerns surrounding agentic systems must be addressed, particularly regarding data privacy and accountability. When using LLMs in agentic systems, businesses need to ensure that the data used to train these models is ethically sourced and free from biases.

“Accountability is a major concern,” Gudimella noted, questioning who would be responsible for the decisions made by autonomous systems.

The Future of Agentic Systems

Agentic systems are set to play a pivotal role in shaping the future of enterprise automation. Gudimella believes that the rise of small, specialised companies leveraging AI and agentic systems will transform industries. “We are already seeing solopreneurs and small teams achieving phenomenal results with AI, and I believe this trend will continue to grow,” he said.

Soon, companies will increasingly rely on AI agents to handle complex tasks, with fewer employees needed to manage these systems. “It’s all about building an ecosystem where each company provides specialised solutions, integrating with others to deliver comprehensive services,” Gudimella concluded.

]]>
Code Review Should Be Completely Taken Over by AI https://analyticsindiamag.com/ai-features/code-review-should-be-completely-taken-over-by-ai/ Mon, 30 Sep 2024 11:09:55 +0000 https://analyticsindiamag.com/?p=10136925 AI is just simply better at reviewing code than humans. Or is it?]]>

Writing code was not enough; the AI world has now decided to create tools that can monitor, edit, and even review code. It turns out that AI may actually be better at reviewing code than humans, and computers are all you need. Brilliant examples of this are tools like CodeAnt, CodeRabbit, and SonarQube, which take the task of reviewing code into their own hands.

“Code reviews are dumb, and I can’t wait for AI to take over completely,” said Santiago Valdarrama, who believes we are not far from a point where the reviewing process is completely automated. But it is a take that comes with plenty of contention.

“My colleagues approve my PRs without even looking at it,” he noted, highlighting a common issue in code reviews. For him, an automated solution would be welcome. “When you review code, most of the time, you have no idea what you are even reading.”

While speaking with AIM, Amartya Jha, the co-founder and CEO of CodeAnt AI, said that developers spend 20 to 30% of their time just reviewing someone else’s code. “Most of the time, they simply say, ‘It looks good, just merge it,’ without delving deeper,” Jha explained. This leads to bugs and security vulnerabilities making their way into production.

Still, he said that the quality of code generated by AI is far from what humans produce. But when it comes to reviewing code, maybe AI could take over. Saurabh Kumar, another developer, argues, “I will let AI review my code when it can write better code than me—boilerplating doesn’t count.”

For better or worse, code reviews are part of the job

One of the key advantages of AI in code review is its ability to process vast amounts of data quickly, freeing up human developers to focus on higher-level tasks. As Mathieu Trachino pointed out, many code reviewers don’t actually dive deep into the code they’re supposed to evaluate. 

The debate ultimately boils down to whether AI can reach a level of understanding and context that is currently unique to human developers. Santiago Valdarrama pointed out that reviewing code is actually easier than writing it, implying that AI might be better suited to code review than code generation. However, some remain sceptical.

Many developers like Trachino envision a future where AI can conduct code reviews more effectively than their human counterparts. Petri Kuittinen echoes this sentiment, noting that traditional line-by-line reviews are no longer cost-effective. 

While there’s optimism about AI taking over code reviews, many developers argue that a complete handover could overlook key human elements. Sebastian Castillo said, “Code review also serves to share knowledge between team members and as a way for more people to be familiar with the wider context of the product implementation,” highlighting that it is important for a human touch while reviewing code. 

AI can’t replace the collaborative learning and communication that occur during human-led code reviews, which seems true. Many developers recognise the benefits of AI while cautioning against eliminating human interaction entirely.

Can AI Fully Replace Human Code Reviews?

Drawing a parallel between AI decision-making and the Boeing 737 Max incidents, a user argued that AI can enhance code reviews but should not replace them: “Boeing 737 Max programmers thought the same as well.”

In essence, AI lacks the capacity to understand the long-term strategic goals of a project. “For anything even remotely critical, my opinion is that AI code reviews are a terrible idea,” said a developer in a Reddit discussion.

But this is also something that modern platforms like CodeAnt have addressed. One of CodeAnt AI’s standout features is its ability to allow enterprises to input their own data and create custom policies.
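To illustrate what a custom policy might look like in principle (a toy sketch, not CodeAnt’s actual rule format or API), consider a scanner that flags organisation-specific patterns on the added lines of a diff:

```python
# Toy custom-policy scanner for a diff; patterns and messages are invented.
import re

POLICIES = [
    (re.compile(r"\bprint\("), "Use the logging module instead of print()"),
    (re.compile(r"password\s*=\s*[\"']"), "Hard-coded credential detected"),
]

def review(diff_lines: list[str]) -> list[str]:
    findings = []
    for n, line in enumerate(diff_lines, start=1):
        if not line.startswith("+"):  # only inspect newly added lines
            continue
        for pattern, message in POLICIES:
            if pattern.search(line):
                findings.append(f"line {n}: {message}")
    return findings

sample_diff = ['+password = "hunter2"', '+print("debug")', '-removed_line']
print(review(sample_diff))
```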

In complex systems where safety and security are critical, human oversight remains essential, though this is something companies are focusing on improving. While AI can already flag bugs, enforce style guides, and detect inefficiencies, the final call on whether code aligns with the larger system architecture and product vision should likely remain with humans, at least for the time being.

]]>
Fuel Your Data with Generative AI https://analyticsindiamag.com/ai-highlights/fuel-your-data-with-generative-ai/ Thu, 26 Sep 2024 06:12:56 +0000 https://analyticsindiamag.com/?p=10136649 Data is the fuel for generative AI. Vast amounts of data and the cloud’s crucial ability to store and process it at scale drove the rapid rise of powerful foundation models. If enterprises can corral their scattered data and make it all available, they can easily fine-tune these models or use retrieval augmented generation (RAG) […]]]>

Data is the fuel for generative AI. Vast amounts of data and the cloud’s crucial ability to store and process it at scale drove the rapid rise of powerful foundation models. If enterprises can corral their scattered data and make it all available, they can easily fine-tune these models or use retrieval augmented generation (RAG) to tailor them to their business needs.

However, the relationship between data and AI goes both ways. AI, too, can be used to improve and enhance your data and make it available for analysis.

While companies have invested heavily in data over the past few years, they often find that it hasn’t been enough. The rise of AI has drawn attention to the gaps in their data and the difficulties in accessing or interpreting it. Data may be isolated in organisational silos; it could be incomplete or poor in quality, making it difficult to work with. 

Below are three examples of using AI to fuel your data rather than vice-versa. Use cases like these may give you quick wins while also generating value from your data asset.

Reducing Extremely Tedious Labour (ETL!)

One of the most resource-intensive tasks in any data project, often consuming as much as 60-70% of the effort, is preparing and moving data to be used for analytics, also called the extract, transform, and load (ETL) processes. This is why AWS is working toward a zero-ETL future.

Fortunately, generative AI can be used to automatically analyse the source and target data structures and then map one into the other. AWS’ generative AI coding assistant, Amazon Q Developer, can build data integration pipelines using natural language. This not only reduces the time and effort required but also helps maintain consistency across different ETL processes, making ongoing support and maintenance easier.

Enterprises often have both structured (e.g., customer profiles and sales orders) and unstructured (e.g., social media or customer feedback) data held in a variety of data sources, formats, schemas, and types. The Amazon Q data integration in AWS Glue can generate ETL jobs for over 20 common data sources, including PostgreSQL, MySQL, Oracle, Amazon Redshift, Snowflake, Google BigQuery, DynamoDB, MongoDB, and OpenSearch.

With generative AI for ETL and data pipelines, data engineers, analysts, and scientists can spend more time solving business problems and deriving insights from the data and less time laying out the plumbing. It is a generative AI use case that most enterprises can start right away.
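The pattern behind such tools can be sketched generically: describe the source and target schemas to a model and ask for a field mapping. In the sketch below, the llm() call is a stub and the schemas and mapping are invented; this is an illustration of the pattern, not Amazon Q Developer’s actual interface.

```python
# Generic sketch of LLM-assisted schema mapping for ETL; llm() is a stub.
import json

source_schema = {"cust_nm": "string", "ord_dt": "date", "amt_usd": "decimal"}
target_schema = {"customer_name": "string", "order_date": "date", "amount": "decimal"}

def llm(prompt: str) -> str:
    # Stub: a real call would go to your model provider here.
    return json.dumps({"cust_nm": "customer_name",
                       "ord_dt": "order_date",
                       "amt_usd": "amount"})

prompt = (
    "Map each source field to the matching target field.\n"
    f"Source: {json.dumps(source_schema)}\nTarget: {json.dumps(target_schema)}\n"
    "Reply with a JSON object of source-to-target names."
)
mapping = json.loads(llm(prompt))
print(mapping)  # a human should still validate before the pipeline ships
```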

Generative BI: Better Insights, Faster

We often speak of democratising data across an organisation, i.e., taking it out of the hands of the specialists and making it available to everyone. Data analysts and data scientists often find themselves burdened with large, complex projects, limiting their ability to deliver daily, actionable insights to everyone. A barrier to democratisation, however, is that not everyone has the skills to work rigorously and creatively with data. 

With generative AI, you can interact with your data using conversational queries and natural language without having to wait for someone to build reports and dashboards to find information, reducing time to value. For instance, a retail executive can ask, “What were our top-performing product categories last quarter, and what factors contributed to their success?” 

Regional supply-chain specialists at BMW Group, a global manufacturer of premium automobiles and motorcycles, have been using the generative AI assistant Amazon Q in QuickSight to swiftly respond to supply chain visibility requests from senior stakeholders, like board members.

Data has the power to influence change, but that requires compelling storytelling. Generative AI can make data easy to work with and enjoyable to use by creating visually appealing documents and presentations that bring the data to life. A side benefit is that it can help people across the organisation become more familiar with the data and its interpretation, making the data useful for more complex AI applications.
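Conceptually, conversational BI reduces to a natural-language question being turned into a query against governed data. The sketch below stubs out the generation step; it illustrates the pattern, not how Amazon Q in QuickSight is implemented.

```python
# Conversational BI pattern: question in, SQL out, run on governed data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("electronics", 120.0), ("apparel", 90.0), ("toys", 45.0)])

def generate_sql(question: str) -> str:
    # Stub: a real system would prompt a model with the table schema.
    return ("SELECT category, SUM(revenue) AS total FROM sales "
            "GROUP BY category ORDER BY total DESC")

question = "What were our top-performing product categories last quarter?"
for row in conn.execute(generate_sql(question)):
    print(row)
```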

Synthetic Data: Get the Data You Want

As enterprises advance in analytics and AI, many realise they lack the data needed to support their newly envisioned use cases. And acquiring third-party data can be prohibitively expensive. Moreover, in regulated industries like healthcare and financial services, where data privacy and security are paramount, using actual customer data may not be possible. Data required to test edge cases in business processes is often limited.

This is where AI-generated high-fidelity synthetic data can come into use for testing, training, and innovation. It mimics the statistical properties and patterns of real datasets while preserving privacy and eliminating sensitive information. It can also be used to augment data for AI model training where data is scarce or sensitive. Besides, executives can use synthetic data for scenario planning to model various business situations and test strategies to mitigate and reduce risk. 

Merck, a global pharmaceutical company, uses synthetic data and AWS services to reduce false reject rates in their drug inspection process. The company has reduced its false reject rate by 50% by developing synthetic defect image data with tools like generative adversarial networks (deep learning models that pit two neural networks against each other to generate new synthetic data) and variational autoencoders (generative neural networks that compress data into a compact representation and then reconstruct it, learning to generate new data in the process). 
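A deliberately simple stand-in for those generative models is sketched below: fit a Gaussian to the real data’s mean and covariance, then sample synthetic rows that preserve its coarse statistical structure without copying any record. Real GAN or VAE pipelines are far more capable; the data here is invented.

```python
# Simplest useful synthetic-data baseline: sample from a Gaussian fitted
# to the real data's mean and covariance (the "real" data is simulated).
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=[50.0, 200.0], scale=[5.0, 30.0], size=(1000, 2))

synthetic = rng.multivariate_normal(real.mean(axis=0),
                                    np.cov(real, rowvar=False), size=1000)

print("real mean:     ", real.mean(axis=0).round(1))
print("synthetic mean:", synthetic.mean(axis=0).round(1))
```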

AI-generated synthetic data can unleash innovation and help in creating delightful customer experiences. Amazon One is a fast and convenient service that allows customers to make payments, present their loyalty card, verify their age, and enter the venue using only their palm. 

Amazon needed a large dataset of palm images to train the system, including variations in lighting, hand poses, and conditions like the presence of a bandage. The team even trained the system to detect highly detailed silicone hand replicas using AI-generated synthetic data. Customers have already used Amazon One more than three million times with 99.9999% accuracy.

AI and Data are Symbiotic

These three examples demonstrate how generative AI can be leveraged to unlock the potential of data, extracting value more quickly and demonstrating tangible wins with generative AI. From automating tedious data integration tasks to empowering business users with conversational analytics, generative AI can help teams work smarter, not harder. And by generating synthetic data for testing and innovation, one can fuel new ideas and capabilities that were previously out of reach. The key is not just to view your data as the fuel for generative AI, but also generative AI as a powerful new tool you can apply to your data. 

]]>
China Unveils New Draft Regulation to Regulate AI Generated Content https://analyticsindiamag.com/ai-news-updates/china-unveils-new-draft-regulation-to-regulate-ai-generated-content/ Tue, 24 Sep 2024 15:14:07 +0000 https://analyticsindiamag.com/?p=10136562 Meanwhile, India’s AI legislation is yet to be drafted, but would hopefully include watermarks and labels on AI-generated content.]]>

China is actively taking steps towards increasing transparency around AI-generated information rolled out on the internet. The Cyberspace Administration of China (CAC), the country’s national internet regulator, recently released a draft regulation that includes labelling instructions for AI-generated content. The regulation, titled AI-Generated Synthetic Content Labeling Measures (人工智能生成合成内容标识办法（征求意见稿）), targets providers of AI-generated text, images, audio, and video to enhance transparency. The draft takes inspiration from laws such as the Cybersecurity Law and the AI Service Management Provisions.

The goal is to introduce a unified standard for AI-related content moderation and reduce the growing amount of misinformation and deepfakes on the internet. 

  • Explicit Labels: Visible marks such as disclaimers or watermarks must be placed on AI-generated text, images, audio, and video. For example, AI-generated videos need clear marks on the opening frame, while text must display disclaimers at appropriate points.
  • Implicit Labels: Hidden data, such as metadata or watermarks, must be embedded in AI-generated files. These markers contain information such as the content’s source, AI service provider, and a unique identifier. Implicit labels are not immediately visible but can be detected by platforms and authorities to verify content authenticity (a minimal sketch of the idea follows this list).
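As a minimal sketch of the implicit-label idea, the snippet below embeds provenance fields into a PNG’s metadata with Pillow; the field names are illustrative, not the format the CAC draft mandates.

```python
# Embed illustrative provenance metadata in a PNG's text chunks (Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "white")  # stand-in for an AI-generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("provider", "example-model-v1")   # hypothetical provider id
meta.add_text("content_id", "a1b2c3d4")         # hypothetical unique identifier
img.save("labelled.png", pnginfo=meta)

print(Image.open("labelled.png").text)  # platforms can read the label back
```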

Implementing these regulations comes with a financial barrier. Platforms like Xiaohongshu, Bilibili, Weibo, Douyin, and Kuaishou have implemented AI content declarations, but the same would be tough for smaller firms. The measures are also framed as an effort towards national and public security in China. 

China vs India 

Beijing has always remained cautious of emerging technologies, and a step ahead in regulating them, especially when compared to the US or EU. The director of the CAC, Zhuang Rongwen, is an influential figure not just in China but also in the West. 

Part of the TIME100 list released last month, Rongwen has consistently worked to implement the values of the Chinese Communist Party (CCP), and has actively ensured the country’s sway in the growing GenAI race against the West. After ChatGPT launched in 2022, China introduced legislation the following year to control the explosion of AI, requiring companies to obtain government approval before deploying models publicly. “China is very much ahead of the game in terms of self-regulating AI within their own nation-state,” said Sen. Mark Warner in an interview with Politico last year on how China leads the world on AI rules, leaving the rest behind. 

In India, earlier this year, MeitY issued an advisory (later revised) on the labelling of AI-related content online, which came after the Gemini AI fiasco. The advisory initially also required companies to seek government approval before launching new models, a provision that drew heavy criticism for potentially hindering innovation. 

]]>
How Generative AI is Transforming Intelligent Collections and Revolutionising Receivables https://analyticsindiamag.com/ai-highlights/how-generative-ai-is-transforming-intelligent-collections-and-revolutionising-receivables/ Mon, 09 Sep 2024 05:09:14 +0000 https://analyticsindiamag.com/?p=10134728 Designed to revolutionise the O2C process, Fractal's ICM shifts the paradigm from traditional month-end reporting to real-time actionable insights. ]]>

The order-to-cash (O2C) process, encompassing everything from order processing to receivables collection, is critical for an organisation’s financial health. For large enterprises, especially in sectors with thousands of daily transactions, managing accounts receivable can be a daunting challenge.

Traditionally, this process was handled manually, with finance teams relying on monthly reports and retrospective analyses. This often led to delayed responses, late payments, and missed opportunities for optimisation.

However, the advent of generative AI has introduced a new era of efficiency and accuracy. By leveraging the power of AI, companies are transforming the O2C process and redefining how receivables are managed.

“In 2017, this was a very descriptive and diagnostic-heavy process,” said Prathmesh Thergaonkar, Director, Finance Analytics at Fractal. “People used to look at end-of-month Excel reports and KPIs to analyse what was happening.” 

This retrospective approach meant that delays and issues were often missed until it was too late to address them effectively.

“As the technology landscape evolves, so must the receivables management process,” Thergaonkar noted. 

Fractal’s Generative AI Breakthrough

This is where Fractal’s latest innovation comes into play—the Intelligent Collections Management (ICM) accelerator for Accounts Receivable (AR) analytics. Designed to revolutionise the O2C process, ICM shifts the paradigm from traditional month-end reporting to real-time actionable insights. 

ICM stands out with its predictive-first approach, setting it apart from conventional AR analytics tools that rely on retrospective month-end reporting. 

By leveraging advanced ML algorithms, ICM delivers real-time predictions and insights. This enables businesses to forecast cash flows with unprecedented accuracy, identify potential payment defaults before they happen, and proactively manage collections efforts. 

With these predictive capabilities, organisations can optimise cash flow and enhance financial stability by taking timely actions.

Comprehensive KPI Framework

ICM’s analytical framework is robust, encompassing four categories of KPIs—descriptive, diagnostic, predictive, and prescriptive. 

Descriptive KPIs: These provide insights into current and historical performance metrics such as ageing reports, days sales outstanding (DSO), and collection efficiency. This data visualisation helps users quickly assess the health of their AR portfolio.

Diagnostic KPIs: These metrics identify the root causes of issues like payment delays or disputes. By analysing trends in dispute reasons or deduction categories, users can address recurring problems at their source.

Predictive KPIs: ICM excels in predictive analytics, forecasting future payment behaviours, highlighting accounts at risk of delinquency, and estimating the likelihood of dispute resolutions. These insights enable users to prioritise efforts and allocate resources more effectively (a simplified illustration appears after this list).

Prescriptive KPIs: Beyond prediction, ICM offers prescriptive recommendations, suggesting next-best actions to improve collections, resolve disputes, and reduce revenue leakage based on historical data and current conditions.
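As a simplified illustration of what a predictive KPI rests on (a sketch, not Fractal’s actual ICM internals), a small classifier can score open invoices for late-payment risk:

```python
# Toy late-payment risk model; the features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per invoice: [avg days past terms historically, amount in $000s]
X = np.array([[0, 5], [2, 12], [15, 8], [30, 40], [1, 3], [25, 22]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = customer previously paid late

model = LogisticRegression().fit(X, y)

open_invoices = np.array([[28, 35], [1, 6]])
for invoice, risk in zip(open_invoices, model.predict_proba(open_invoices)[:, 1]):
    print(f"invoice {invoice} -> late-payment risk {risk:.0%}")
```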

Seamless Data Integration & User-Centric Design

One of ICM’s standout features is its ability to integrate data from various sources, including ERP systems, local databases, planning tools, SharePoint, and Excel. This comprehensive data aggregation ensures access to a single source of truth, eliminating data silos and fostering cross-departmental collaboration. 

By harmonising data from different systems, ICM not only enhances analytics accuracy but also simplifies the user experience, allowing seamless navigation and exploration of insights.

ICM’s design caters to various user personas, from CXOs to analysts, with an intuitive interface that offers extensive slicing and dicing capabilities. Users can drill down into the most granular levels of data, whether seeking high-level summaries or detailed transaction-level insights. 

Customisable dashboards and reports allow users to focus on the metrics that matter most, while interactive visualisations make interpreting complex data straightforward.

Comprehensive Coverage of O2C Areas

ICM addresses all critical areas of the O2C process, including collections, deductions, leakage, disputes, and credit management, ensuring a holistic view of AR and enabling proactive issue resolution.

Collections: ICM optimises collections by prioritising accounts based on risk profile and payment history, streamlining the process with automated reminders and follow-ups, reducing DSO, and improving cash flow.

Deductions: The tool leverages machine learning to categorise deductions, enabling users to understand underlying causes and take corrective actions. With generative AI, it efficiently charges back invalid low-value deductions, helping minimise future occurrences and improve customer negotiations.

Leakage: ICM detects potential revenue leakage points, such as uncollected invoices or unauthorised discounts, allowing users to recover lost revenue and prevent future leakage.

Disputes: ICM accelerates dispute resolution by predicting the likelihood of successful outcomes and providing recommendations for effective resolution, reducing time and effort and enhancing customer satisfaction.

Credit Management: ICM helps users manage credit risk by evaluating customer creditworthiness and recommending dynamic credit limits, ensuring businesses extend credit to reliable customers while minimising bad debt risk.

Real-Time Insights and Autonomous Finance

ICM features a generative AI layer that enhances decision-making with real-time responses, combining graphical representations and business commentary. This AI-driven assistant provides forecasts, next-best actions, and contextual insights, driving efficiency across the board. 

Be it predicting payment dates or analysing dispute impacts on cash flow, the AI assistant delivers actionable intelligence that enhances user productivity.

ICM’s ultimate vision is to drive the O2C process towards an autonomous finance state, where human intervention becomes optional. Its advanced analytics and AI capabilities enable the tool to handle most decision-making processes independently, based on internal and external data. 

By automating routine tasks and providing intelligent insights, ICM frees finance professionals to focus on strategic initiatives, improving efficiency and empowering organisations to achieve greater agility and resilience.

]]>
Genpact’s Global Enterprise Leader Edmund DeLussey, on Breaking Through the Noise to Achieve Success with GenAI https://analyticsindiamag.com/ai-highlights/genpacts-global-enterprise-leader-edmund-delussey-on-breaking-through-the-noise-to-achieve-success-with-genai/ Thu, 05 Sep 2024 09:00:17 +0000 https://analyticsindiamag.com/?p=10134498 With the massive volume of AI solutions at play, separating actual impactful work from all the hype surrounding AI becomes crucial. “There’s just so much noise out there. And there’s so much innovation. It’s hard to tell fact from fiction,” said Edmund DeLussey, global enterprise leader at Genpact, during a keynote at AIM’s MachineCon Summit […]]]>

With the massive volume of AI solutions at play, separating actual impactful work from all the hype surrounding AI becomes crucial. “There’s just so much noise out there. And there’s so much innovation. It’s hard to tell fact from fiction,” said Edmund DeLussey, global enterprise leader at Genpact, during a keynote at AIM’s MachineCon Summit in New York. 

Five-Principle Approach

Genpact emphasises a strategic approach grounded in five fundamental principles to succeed with generative AI. 

The first is identifying the right projects by analysing vast amounts of data to uncover trends and evaluate outcomes. The company recommends creating a centre of excellence (CoE) to develop expertise, assess infrastructure needs, and select promising use cases. This approach has enabled Genpact to drive innovative solutions internally, as well as for its clients.

The second principle is embedding AI into processes. A seamless integration into existing workflows empowers decision-makers with real-time insights. A case in point is Genpact’s work with a media conglomerate, where they used GenAI to create a superintelligent assistant, boosting customer satisfaction and sales by providing real-time recommendations.

Prioritising data governance and responsible AI is the third principle. Establishing a strong data foundation and ethical practices enables fair and trustworthy AI solutions. Genpact’s responsible AI framework, illustrated by their collaboration with a global bank, demonstrates how sound governance can streamline processes and enhance transparency.

The fourth principle involves building a robust technical architecture, which includes scalable infrastructure and effective model management. Genpact’s work with a leading brewing company shows how a strong data platform can improve accuracy, productivity, and decision-making, driving digital transformation.

Finally, enabling a scalable operating model is crucial. Standardising processes and developing adaptable teams allow organisations to respond quickly to changing needs. Genpact’s work with a global retailer, which significantly reduced supplier disputes through enhanced data processes, highlights the effectiveness of this approach.

AI Across Sectors 

Besides discussing the length and breadth of AI applications across industries like life sciences, consumer goods, and automobiles, DeLussey also touched upon the challenge of bringing AI to a person, or rather, how AI would be accessible to people. 

“One of the things we need to be thinking about is how do we bring AI to a person? How do we do what Microsoft is trying to do with their Copilot that they just released in beta…? How do you use a tool like that to help with busy areas of finance [and other business functions]?” said DeLussey. 

AI for People

While talking about bringing AI to employees and customers, DeLussey mentioned that one of Genpact’s partners, in collaboration with Salesforce, was developing an AI service management platform for an auto manufacturer. 

The AI system aggregated data on individual car services, warranties and other relevant information. This helped service technicians quickly identify the correct repairs and avoid the hassle of a customer repeatedly talking about the vehicle’s service history or searching the web for those details. 

“This [AI consolidating and summarising data] is an area we’re really excited about. We think there’s going to be a ton of innovation in this space, and it’s going to enable many people to do better work,” said DeLussey. He believes the technology will not only boost productivity but also increase revenues. 

AI in Healthcare

DeLussey also shed light on AI in drug discovery and its advancement over the years. “In 2015, there was exactly one AI-designed drug in clinical trials, but by the end of 2023, there were 67. As of now, there’s not a single drug that has actually made it out into the market yet, but it’s only a matter of time,” he said. 

The FDA has approved approximately 700 models and developed new methods to streamline the process. With advancements in protein folding, such as Google DeepMind’s AlphaFold 3, which is claimed to improve interaction-prediction accuracy by 50%, AI models assisting drug discovery will only accelerate. 

Interestingly, DeLussey drew attention to the sector-focused solutions that AI companies are working on. In healthcare, big tech companies such as Microsoft and Google are building language models to assist the medical field. Many startups, too, are working in this space. 

DeLussey also discussed how consumer product companies use AI to create new food items. For example, he emphasised how Unilever has developed zero-salt alternatives using different AI combinations and released a vegan version of Hellmann’s mayonnaise that tastes like regular mayonnaise. 

Moreover, consumer product companies have also been using AI to minimise waste and improve sustainability. DeLussey gave the example of chocolate manufacturer Mars, which is using AI to improve its sustainability strategies, like using recyclable paper wrappers instead of plastic.

AI as the Solution

Genpact is prioritising AI and positioning itself as an AI-first company. This means that whenever a company approaches them with a problem, they first ask their team to consider how AI can be applied to solve the problem.

Interestingly, Genpact’s AI-focused approach is paying off. In its Q2 2024 earnings, the company posted promising numbers with data-tech-AI contributing $546 million, or 46%, to the quarter’s total revenue.

However, DeLussey warned against a one-size-fits-all approach to AI. He noted that while some companies are leading with AI, others need to adopt it gradually. The key is to understand how AI can fundamentally transform work and then strategically implement it where it can deliver lasting value.

]]>
Channel-Specific and Product-Centric GenAI Implementation in Enterprises Leads to Data Silos and Inefficiencies https://analyticsindiamag.com/deep-tech/channel-specific-and-product-centric-genai-implementation-in-enterprises-leads-to-data-silos-and-inefficiencies/ Tue, 03 Sep 2024 07:41:20 +0000 https://analyticsindiamag.com/?p=10134300 Pega employs ‘situational layer cake’, which, as a part of its exclusive centre-out architecture, helps adapt microjourneys for different customer types, lines of business, geographies, and more. ]]>

Organisations often struggle with data silos and inefficiencies when implementing generative AI solutions. This affects over 70% of enterprises today, but global software company Pegasystems, aka Pega, seems to have cracked the code by using its patented ‘situational layer cake’ architecture. 

This approach democratises the use of generative AI across its platform, allowing clients to seamlessly integrate AI into their processes. They can choose from any LLM service provider, including OpenAI, Google’s Vertex AI, and Azure OpenAI Services, thereby ensuring consistent and efficient AI deployment across all business units.

“Our GenAI implementation at the rule type levels allows us to democratise the use of LLMs across the platform for any use case and by mere configuration, our clients can use any LLM service provider of their choice,” said Deepak Visweswaraiah, vice president, platform engineering, and site managing director at Pegasystems, in an interaction with AIM.

Pega vs the World 

Recently, Salesforce announced the launch of two new generative AI agents, Einstein SDR Agent and Einstein Sales Coach Agent, which autonomously engage leads and provide personalised coaching. This move aligns with Salesforce’s strategy to integrate AI into its Einstein 1 Agentforce Platform, enabling companies like Accenture to scale deal management and focus on complex sales.

Salesforce integrates AI across all key offerings through its unified Einstein 1 Platform, which enhances data privacy, security, and operational efficiency via the Einstein Trust Layer. 

“We have generative AI capabilities in sales cloud, service cloud, marketing cloud, commerce cloud, as well as our data cloud product, making it a comprehensive solution for enterprise needs,” said Sridhar H, senior director of solution engineering at Salesforce.

SAP’s generative AI strategy, on the other hand, centres around integrating AI into core business processes through strategic partnerships, ethical AI principles, and enhancing its Business Technology Platform (BTP) to drive relevance, reliability, and responsible AI use across industries.

“We are adding a generative AI layer to our Business Technology Platform to address data protection concerns and enhance data security,” stated Sindhu Gangadharan, senior VP and MD of SAP Labs, underscoring the company’s focus on integrating AI with a strong emphasis on security and business process improvement.

Oracle, on the other hand, focuses on leveraging its second-generation cloud infrastructure, Oracle Cloud Infrastructure (OCI). It is designed with a unique, non-blocking network architecture to support AI workloads with enhanced data privacy while extending its data capabilities across multiple cloud providers.

“We’re helping customers do training inference and RAG in isolation and privacy so that you can now bring corporate sensitive, private data…without impacting any privacy issue,” said Christopher G Chelliah, senior vice president, technology & customer strategy, JAPAC at Oracle.

Meanwhile, IBM has watsonx.ai, an AI and data platform designed to help companies integrate, train, and deploy AI models across various business applications.

IBM’s generative AI strategy with watsonx.ai differentiates itself by offering extensive model flexibility, including IBM-developed (Granite), open-source (Llama 3 and the like), and third-party models, along with robust client protection and hybrid multi-cloud deployment options. Pega, meanwhile, focuses on deeply integrating AI within its platform to streamline business processes and eliminate data silos through its unique situational layer cake architecture.

Pega told AIM that it distinguishes itself from its competitors by avoiding the limitations of the traditional technological approaches, which often lead to redundant implementations and data silos. “In contrast, competitors might also focus more on channel-specific designs or product-centric implementations, which can lead to inefficiencies and fragmented data views across systems,” said Visweswaraiah. 

Situational Layer Cake Architecture 

Pega told AIM that its approach to integrating GenAI processes into business operations is distinct due to its focus on augmenting business logic and decision engines rather than generating code for development. 

It employs the situational layer cake architecture, which, as part of Pega’s exclusive centre-out architecture, helps adapt microjourneys for different customer types, lines of business, geographies, and more. 

“Our patented situational layer cake architecture works in layers making specialising a cinch, differentiating doable, and applying robust applications to any situation at any time, at any scale,” said Visweswaraiah.

He added that enterprises can start with small, quick projects that can grow and expand over time, ensuring they are adaptable and ready for future challenges.

In addition to this, the team said it has the ‘Pega Infinity’ platform, which can mirror any organisation’s business by capturing the critical business dimensions within its patented situational layer cake. 

“Everything we build in the Pega platform, processes, rules, data models, and UI is organised into layers within the situational layer cake. This means that you can roll out new products, regions, or channels without copying or rewriting your application,” shared Visweswaraiah. 

He further said that the situational layer cake lets you declare what is different, and only what is different, in layers that match each dimension of your business. 

Simply put, when a user executes the application, the Pega platform slices through the situational layer cake and automatically assembles an experience that is tailored exactly to that user’s context. 
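Conceptually, that layered resolution can be sketched as follows; this toy example only illustrates the idea of layers declaring their deltas and is not Pega’s actual engine:

```python
# Toy layered configuration: each layer declares only what differs, and
# resolution merges the deltas that match the user's context onto a base.
BASE = {"approval_limit": 1000, "language": "en", "channel": "web"}
LAYERS = {
    ("geo", "DE"): {"language": "de"},
    ("line_of_business", "corporate"): {"approval_limit": 50000},
}

def resolve(context: dict) -> dict:
    config = dict(BASE)
    for (dimension, value), overrides in LAYERS.items():
        if context.get(dimension) == value:
            config.update(overrides)  # apply only this layer's deltas
    return config

print(resolve({"geo": "DE", "line_of_business": "corporate"}))
# {'approval_limit': 50000, 'language': 'de', 'channel': 'web'}
```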

Visweswaraiah believes that this architecture has given them a great opportunity to integrate GenAI into the platform at the right layers so it is available across the platform. 

]]>
The Operationalisation of GenAI https://analyticsindiamag.com/ai-highlights/the-operationalisation-of-genai/ Mon, 02 Sep 2024 08:28:25 +0000 https://analyticsindiamag.com/?p=10134232 Organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.]]>

The operationalisation of GenAI is becoming significant across various industries. Vinoj Radhakrishnan and Smriti Sharma, Principal Consultants, Financial Services at Fractal, shared insights into this transformative journey, shedding light on how GenAI is being integrated into organisational frameworks, particularly in the banking sector, addressing scepticism and challenges along the way.

“GenAI has undergone an expedited evolution in the last 2 years. Organisations are moving towards reaping the benefits that GenAI can bring to their ecosystem, including in the banking sector. Initial scepticism surrounding confidentiality and privacy has diminished with more available information on these aspects,” said Sharma. 

She noted that many organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

Radhakrishnan added, “Organisations are now more open to exploring various use cases within their internal structures, especially those in the internal operations space that are not customer facing. 

“This internal focus allows for exploring GenAI’s potential without the regulatory scrutiny and reputational risk that customer facing applications might invite. Key areas like conversational BI, knowledge management, and KYC ops are seeing substantial investment and interest.”

Challenges in Operationalisation

Operationalising GenAI involves scaling applications, which introduces complexities. “When we talk about scaling, it’s not about two or three POC use cases; it’s about numerous use cases to be set up at scale with all data pipelines in place,” Sharma explained. 

“Ensuring performance, accuracy, and reliability at scale remains a challenge. Organisations are still figuring out the best frameworks to implement these solutions effectively,” she said.

Radhakrishnan emphasised the importance of backend development, data ingestion processes, and user feedback mechanisms. 

“Operationalising GenAI at scale requires robust backend-to-frontend API links and contextualised responses. Moreover, adoption rates play a crucial role. If only a fraction of employees use the new system and provide feedback, the initiative can be deemed a failure,” he said.

The Shift in Industry Perspective

The industry has seen a paradigm shift from questioning the need for GenAI to actively showing intent for agile implementations. “However,” Sharma pointed out, “only a small percentage of organisations, especially in banking, have a proper framework to measure the impact of GenAI. Defining KPIs and assessing the success of GenAI implementations remain critical yet challenging tasks.”

The landscape is evolving rapidly. From data storage to LLM updates, continuous improvements are necessary. Traditional models had a certain refresh frequency, but GenAI requires a more dynamic approach due to the ever-changing environment.

Addressing employee adoption, Radhakrishnan stated, “The fear that AI will take away jobs is largely behind us. Most organisations view GenAI as an enabler rather than a replacement. The design and engineering principles we adopt should focus on seamless integration into employees’ workflows.”

Sharma illustrates with an example, “We are encouraged to use tools like Microsoft Copilot, but the adoption depends on how seamlessly these tools integrate into our daily tasks. Employees who find them cumbersome are less likely to use them, regardless of their potential benefits.”

Data Privacy and Security

Data privacy and security are paramount in GenAI implementations, especially in sensitive sectors like banking. 

Radhakrishnan explained, “Most GenAI use cases in banks are not customer-facing, minimising the risk of exposing confidential data. However, there are stringent guardrails and updated algorithms for use cases involving sensitive information to ensure data protection.”

Radhakrishnan explained that cloud providers like Microsoft and AWS offer robust security measures. For on-premises implementations, organisations need to establish specific rules to compartmentalise data access. 

“Proprietary data also requires special handling, often involving masking or encryption before it leaves the organisation’s environment,” Sharma added.

Best Practices for Performance Monitoring

Maintaining the performance of GenAI solutions involves continuous integration and continuous deployment (CI/CD). 

“LLMOps frameworks are being developed to automate these processes,” Radhakrishnan noted. “Ensuring consistent performance and accuracy, especially in handling unstructured data, is crucial. Defining a ‘golden dataset’ for accuracy measurement, though complex, is essential.”

Sharma added that the framework for monitoring and measuring GenAI performance is still developing. Accuracy involves addressing hallucinations and ensuring data quality. Proper data management is fundamental to achieving reliable outputs.

CI/CD play a critical role in the operationalisation of GenAI solutions. “The CI/CD framework ensures that as underlying algorithms and data evolve, the models and frameworks are continuously improved and deployed,” Radhakrishnan explained. “This is vital for maintaining scalable and efficient applications.”

CI/CD frameworks help monitor performance and address anomalies promptly. As GenAI applications scale, these frameworks become increasingly important for maintaining accuracy and cost-efficiency.
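A minimal sketch of such a gate, assuming a hypothetical golden dataset and a stubbed model endpoint, could sit in a CI pipeline as follows:

```python
# Toy golden-dataset gate for CI; questions, answers, and the model
# stub are invented placeholders for your own evaluation set and endpoint.
GOLDEN = [
    ("What is our refund window?", "14 days"),
    ("Which plan includes premium support?", "enterprise"),
]

def model_answer(question: str) -> str:
    # Stub standing in for a call to the deployed GenAI application.
    canned = {
        "What is our refund window?": "Refunds are issued within 14 days.",
        "Which plan includes premium support?": "The Enterprise plan.",
    }
    return canned.get(question, "unknown")

def accuracy() -> float:
    hits = sum(expected.lower() in model_answer(q).lower()
               for q, expected in GOLDEN)
    return hits / len(GOLDEN)

score = accuracy()
print(f"golden-set accuracy: {score:.0%}")
assert score >= 0.9, "Accuracy regression: block this deployment"
```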

Measuring ROI is Not So Easy

Measuring the ROI of GenAI implementations is complex. “ROI in GenAI is not immediately apparent,” Sharma stated. “It’s a long-term investment, similar to moving data to the cloud. The benefits, such as significant time savings and reduction in fines due to accurate information dissemination, manifest over time.”

Radhakrishnan said, “Assigning a monetary value to saved person-hours or reduced fines can provide a tangible measure of ROI. However, the true value lies in the enhanced efficiency and accuracy that GenAI brings to organisational processes.”

“We know the small wins—saving half a day here, improving efficiency there—but quantifying these benefits across organisations is challenging. At present, only a small portion of banks have even started the journey on a roadmap for that,” added Sharma.

Sharma explained that investment in GenAI is booming, but there is a paradox. “If you go to any quarterly earnings call, everybody will say we are investing X number of dollars in GenAI. Very good. But on the ground, everything is a POC (proof of concept), and everything seems successful at POC stage. The real challenge comes after that, when a successful POC needs to be deployed at production level. There are very few organisations scaling from POC to production as of now. One of the key reasons for that is uneasiness on the returns from such an exercise – taking us back to the point on ROI.”

“Operational scaling is critical,” Radhakrishnan noted. “Normally, when you do a POC, you have a good sample, and you test the solution’s value. But when it comes to operational scaling, many aspects come into play. It must be faster, more accurate, and cost-effective.” 

Deploying and scaling the solution shouldn’t involve enormous investments. The solution must be resilient, with the right infrastructure. When organisations move from POC to scalable solutions, they often face trade-offs in terms of speed, cost, and continuous maintenance.

The Human Element

Human judgement and GenAI must work in harmony. “There must be synergy between the human and what GenAI suggests. For example, in an investment scenario, despite accurate responses from GenAI, the human in the loop might disagree based on their gut feeling or client knowledge,” said Radhakrishnan.

This additional angle is valuable and needs to be incorporated into the GenAI algorithm’s context. A clash between human judgement and algorithmic suggestions can lead to breakdowns, especially in banking, where a single mistake can result in hefty fines.

Data accuracy is crucial, especially for banks, which rely heavily on on-premises solutions to secure customer data. 

“Data accuracy is paramount, and most banks are still on-premises to secure customer data. This creates resistance to moving to the cloud. However, open-source LLMs can be fine-tuned for on-premises use, although initial investments are higher,” added Sharma.

The trade-off is between accuracy and contextualisation. Fine-tuning open-source models is often better than relying solely on larger, generic models.

Radhakrishnan and Sharma both noted that the future of GenAI in banking is moving towards a multi-LLM setup and small language models. “We are moving towards a multi-LLM setup where no one wants to depend on a single LLM for cost-effectiveness and accuracy,” said Sharma. 

Another trend she predicted is the development of small language models specific to domains like banking, which handle nuances and jargon better than generalised models.

Moreover, increased regulatory scrutiny is on the horizon. “There’s going to be a lot more regulatory scrutiny, if not outright regulation, on GenAI,” predicted Radhakrishnan.

“Additionally, organisations currently implementing GenAI will soon need to start showing returns. There’s no clear KPI to measure GenAI’s impact yet, but this will become crucial,” he added.

“All the cool stuff, especially in AI, will only remain cool if the data is sound. The more GenAI gathers steam, the more data tends to lose attention,” said Sharma, adding that data is the foundation, and without fixing it, no benefits from GenAI can be realised. “Banks, with their fragmented data, need to consolidate and rein in this space to reap any benefits from GenAI,” she concluded.

]]>
Can Gen AI Reduce the Technical Debt of Supply Chain Platforms https://analyticsindiamag.com/ai-highlights/can-gen-ai-reduce-the-technical-debt-of-supply-chain-platforms/ Fri, 23 Aug 2024 13:03:46 +0000 https://analyticsindiamag.com/?p=10133651 Madhumita Banerjee sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.]]>

Technical debt, like financial debt, is a concept in information technology where shortcuts, quick fixes or immature solutions used to meet immediate needs burden enterprises with future costs. This debt can significantly impact supply chain efficiency, especially as businesses face the pressures of staying competitive and agile in a post-pandemic world. 

Madhumita Banerjee, associate manager, supply chain and manufacturing at Tredence, sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.

Banerjee explained that in the context of supply chains, technical debt accumulates when outdated systems, fragmented processes, and manual, siloed workflows are used. Over time, these inefficiencies lead to increased operational costs, reduced responsiveness, and heightened exposure to risks, making it harder for companies to remain competitive.

One of the primary contributors to technical debt, according to Banerjee, is the reliance on legacy systems. “Many supply chains rely on outdated systems that are difficult to integrate with modern technologies, leading to increased maintenance costs and inefficiencies,” she noted. 

These legacy systems, coupled with data silos where information is stored in disparate systems, create significant barriers to seamless information flow, which is critical for supply chain efficiency.

Manual processes also play a role in accumulating technical debt. Tasks requiring human intervention are prone to errors and delays, contributing to inefficiencies and higher operational costs. 

As companies transitioned to digitalization, the rushed adoption of custom solutions and cloud migrations—often driven by the need to keep pace with technological advancements—introduced added complexity and heightened system maintenance burdens. Generative AI emerges as a pivotal new factor in this scenario. Although early adopters face new risks and the possibility of future debt with each generative AI deployment, the technology shows significant promise in addressing these challenges.

The Role of Generative AI in Addressing Technical Debt

Banerjee emphasised that while analytics has historically helped connect data and enhance visibility, the emergence of generative AI, especially LLMs, marked a significant shift. 

“Conversational AI and LLM-powered agents make it easier for functional partners—both technical and non-technical—to understand and act on complex data,” she explained. This is especially crucial in supply chains, where not all stakeholders, such as warehouse partners and freight workers, are tech-savvy.

One of the most significant advantages of generative AI in supply chain management is its ability to enhance data integration and visibility. For instance, in order processing, which traditionally involves many manual steps prone to errors, generative AI can automate the entire workflow—from order intake and validation to order confirmation—ensuring seamless communication across departments and reducing the need for manual intervention.

Generative AI also holds promise in optimising decision-making processes within supply chain platforms. However, Banerjee noted that the effectiveness of generative AI in this area depends on the maturity of the supply chain itself. 

“For instance, if we have an LLM-powered event listener that detects market sentiments and links this information to the forecast engine, it can significantly narrow down the information demand planners need,” she said. 

This level of optimisation requires a robust and connected data model where all data parts communicate effectively, enabling real-time insights and more accurate demand forecasts.
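
To illustrate the kind of linkage Banerjee describes, here is a toy sketch in which an event listener’s sentiment signal nudges a baseline demand forecast. The keyword scorer, the headlines, and the 10% adjustment cap are invented for the example; a production system would use an LLM classifier feeding a calibrated forecast engine.

```python
# Toy sketch: an "event listener" turns market headlines into a sentiment
# signal that nudges a baseline demand forecast. The keyword scorer and the
# 10% adjustment cap are invented for illustration; a real system would use
# an LLM classifier feeding a calibrated forecast engine.

BASELINE_FORECAST_UNITS = 10_000

POSITIVE = {"surge", "record", "growth", "strong"}
NEGATIVE = {"recall", "strike", "shortage", "weak"}


def sentiment_score(headlines: list[str]) -> float:
    """Average keyword sentiment, clamped to [-1, 1]."""
    scores = []
    for line in headlines:
        words = set(line.lower().split())
        scores.append(len(words & POSITIVE) - len(words & NEGATIVE))
    mean = sum(scores) / max(len(scores), 1)
    return max(-1.0, min(1.0, mean))


def adjusted_forecast(baseline: float, sentiment: float) -> float:
    # Cap the sentiment effect at 10% of baseline in either direction.
    return baseline * (1 + 0.10 * sentiment)


if __name__ == "__main__":
    headlines = [
        "Retail demand shows record growth this quarter",
        "Port strike threatens component shortage",
        "Strong festive bookings reported by distributors",
    ]
    s = sentiment_score(headlines)
    print(f"Sentiment signal: {s:+.2f}")
    print(f"Adjusted forecast: {adjusted_forecast(BASELINE_FORECAST_UNITS, s):,.0f} units")
```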

Predictive Analytics, Real-time Data Processing, and Compliance

Banerjee said that predictive analytics is another area where generative AI can revolutionise supply chain management. She recalled the evolution from traditional “what-if” analyses to more advanced machine learning algorithms that predict outcomes over time. 

However, she pointed out that decision-making has now evolved to require not just predictions but also a deeper understanding of cause-and-effect relationships. “With GenAI, we can weave in causal discovery algorithms that translate complex data into actionable insights presented in simple English for all stakeholders to understand,” she added.

This capability is particularly valuable in areas like inventory forecasting, where understanding the root causes of forecast errors and deviations can lead to more accurate and reliable predictions. By translating these insights into easily digestible information, generative AI empowers supply chain managers to make more informed decisions, ultimately improving efficiency and reducing costs.

Speaking about real-time data processing being critical for the effectiveness of generative AI, Banerjee clarified that it’s not the AI that contributes to real-time processing but the other way around. 

“We need to have real-time data to make sure we can analyse scenarios and use generative AI to its maximum potential,” she explained. For instance, ensuring that data entered into legacy systems is immediately available on the cloud allows LLMs to process and convert this data into actionable insights without delay.

In terms of compliance and risk management, generative AI can bolster efforts by removing manual interventions. Banerjee highlighted procurement and transportation as key areas where GenAI can enhance compliance. In transportation, where contracts are reviewed annually, GenAI-powered systems can query specific contracts, compare terms, and ensure adherence to key metrics like freight utilisation and carrier compliance.
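
As a simple illustration of the contract-checking pattern described above, the sketch below extracts a committed freight-utilisation figure from contract text and compares it with observed performance. The contract snippet, the regex, and the observed figure are invented; a GenAI system would typically route the extraction through an LLM with retrieval over the full contract repository rather than a hand-written pattern.

```python
# Illustrative sketch of the contract-compliance check described above.
# The contract snippet, regex extraction, and observed figure are invented;
# a GenAI system would typically use an LLM with retrieval over the full
# contract repository instead of a hand-written pattern.

import re

CONTRACT_TEXT = """
Carrier shall maintain a minimum freight utilisation of 90% measured
quarterly, and on-time delivery of no less than 95%.
"""

OBSERVED_UTILISATION = 85.0  # e.g. pulled from a transport management system


def committed_utilisation(text: str) -> float:
    """Extract the committed freight-utilisation percentage from a contract."""
    match = re.search(r"freight utilisation of (\d+(?:\.\d+)?)%", text)
    if match is None:
        raise ValueError("No freight-utilisation clause found")
    return float(match.group(1))


if __name__ == "__main__":
    committed = committed_utilisation(CONTRACT_TEXT)
    gap = OBSERVED_UTILISATION - committed
    status = "compliant" if gap >= 0 else f"non-compliant ({gap:+.1f} points)"
    print(f"Committed {committed}% vs observed {OBSERVED_UTILISATION}%: {status}")
```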

Challenges and Future Outlook

Although generative AI offers numerous benefits, challenges still persist. Banerjee stressed the importance of properly vetting the fitment and maturity of a GenAI strategy. “Embarking on the GenAI journey may appear simple, but without a thorough assessment of need and fitment, along with strong investments in data quality, integration, and governance, companies are likely to deepen their technical debt,” she added.

One of the most significant concerns is the issue of “hallucination”, where AI models generate incorrect or misleading information. Equally important is validating the data on which AI models are trained to avoid garbage-in, garbage-out scenarios.

In summary, Banerjee tied the discussion back to the central theme of technical debt. By addressing the key contributors to technical debt—legacy systems, data silos, and manual processes—generative AI can help reduce future costs and risks, enabling companies to pursue digital initiatives with greater confidence. 

“If we can successfully integrate GenAI into our systems, we can revolutionise the entire supply chain platform, making it more efficient, responsive, and competitive,” she concluded.

]]>
Toss a Stone in Bangalore and It will Land on a Generative AI Leader https://analyticsindiamag.com/ai-features/toss-a-stone-in-bangalore-and-it-will-land-on-a-generative-ai-leader/ Fri, 23 Aug 2024 11:18:49 +0000 https://analyticsindiamag.com/?p=10133632 But then not everyone can be a good GenAI leader.]]>

Are software engineering roles disappearing? Not exactly. The apparent decline in job postings might just be a shift in titles—think ‘Generative AI Leader’ instead of ‘Software Engineer’. And now, as if on cue, everyone has become a generative AI expert and leader. Not just Silicon Valley, it’s the same in Bengaluru too.

Vaibhav Kumar, senior director of AI/ML at AdaSci, pointed out this phenomenon in a LinkedIn post. “In Bangalore these days, if you randomly toss a stone, odds are it will land on a Generative AI Leader—earlier, they used to be software engineers, but now they seem to be on the brink of extinction.”

It is true that there are a slew of new jobs that are being created because of generative AI such as AI entrepreneur, chief AI officer, AI ethicist, AI consultant, and at least 50 more. But it has also given rise to people who simply call themselves ‘generative AI leaders’.

Everyone’s an AI Engineer

Kumar’s point is that now everyone is calling themselves an expert in AI as the barrier to entry is significantly lower. But then not everyone can be a good GenAI leader. Vishnu Ramesh, the founder of Subtl.ai, said that the only way to find a good generative AI leader is to ask them how many generative AI projects they have driven and whether these projects actually benefited the organisation. 

“The number of chatbots built will soon overtake Bangalore traffic,” Shashank Hegde, data science and ML manager at HP, said in jest, implying that with every company experimenting with generative AI in use cases, most of them are coming up with chatbots on their systems, which, honestly, people are not very fond of. 

Ramesh and Hegde’s points found takers. More engineers in the discussion described how their team’s ‘generative AI leaders’ were unable to perform basic machine learning tasks and were mostly experts in data science rather than generative AI. “A Rs499 course from Udemy or Coursera is delivering GenAI leaders very rapidly,” commented Chetan Badhe.

AIM had earlier reported that fancy courses from new organisations are promising the jobs of the future, but are also leaving freshers searching for roles that do not yet exist in the market. “GenAI leaders don’t know what’s bidirectional in BERT,” added another user.

Meanwhile, a recent IBM study found that around 49% of Indian CEOs surveyed said they were hiring for GenAI roles that didn’t exist last year, while 58% said they are pushing their organisation to adopt generative AI more quickly than some people are comfortable with. 

This makes it clear that generative AI is the leading focus for companies, although for some, it’s more about appearances than substance. 

Moreover, getting ‘generative AI’ on your profile can also boost your salary by up to 50%. AIM Research noted that the median salaries of generative AI developers and engineers ranged between INR 11.1 lakh and 12.5 lakh per annum.

It just makes sense to call yourself a generative AI leader if you are already working in the software engineering field. But getting upskilled in the field to be credible is also important. 

Bengaluru is All About AI

Just like everyone is “thrilled” on LinkedIn, everyone is doing something in AI in Bengaluru. Someone correctly pointed out that if LinkedIn were a city, it would definitely be Bengaluru. The city’s AI hub, HSR Layout, was recently looking for a chief everything officer, someone who can perform a plethora of tasks all alone. 

And it is indeed true that much of software engineering is becoming all about generative AI because of the trend and hype. Earlier, Bangalore was filled with software engineers from IT companies and startups; now they have slowly turned into generative AI leaders. Some are even influencers on X or LinkedIn. 

At the same time, Bengaluru’s AI culture is also giving rise to 10x engineers, who are able to do the work of 10 people using generative AI. Some even argue that there is no need for a computer science degree anymore to get into AI. It is definitely time to rewrite your resume and say you are a ‘generative AI leader’.

]]>
Tech Mahindra Partners with Google Cloud to Accelerate Generative AI Adoption https://analyticsindiamag.com/ai-news-updates/tech-mahindra-partners-with-google-cloud-to-accelerate-generative-ai-adoption/ Thu, 22 Aug 2024 08:53:34 +0000 https://analyticsindiamag.com/?p=10133495 Tech Mahindra, a global leader in technology consulting and digital solutions, has announced a strategic partnership with Google Cloud aimed at accelerating the adoption of generative AI and driving digital transformation across Mahindra & Mahindra (M&M) entities.  This collaboration seeks to leverage cutting-edge AI and ML to enhance various aspects of engineering, supply chain, pre-sales, […]]]>

Tech Mahindra, a global leader in technology consulting and digital solutions, has announced a strategic partnership with Google Cloud aimed at accelerating the adoption of generative AI and driving digital transformation across Mahindra & Mahindra (M&M) entities. 

This collaboration seeks to leverage cutting-edge AI and ML to enhance various aspects of engineering, supply chain, pre-sales, and after-sales services for M&M, one of India’s leading industrial enterprises.

As part of the partnership, Tech Mahindra will spearhead the cloud transformation and digitisation of M&M’s workspace, deploying the company’s data platform on Google Cloud. This effort is expected to revolutionise M&M’s operations by integrating advanced AI-powered applications into critical business areas. 

Notably, Google Cloud’s AI technologies will be utilised to detect anomalies during the manufacturing process, ensuring zero breakdowns, optimising energy efficiency, enhancing vehicle safety, and ultimately improving the overall customer experience.

Bikram Singh Bedi, vice president and country MD at Google Cloud, emphasised the importance of this collaboration, saying, “Google Cloud is committed to providing companies like M&M with our trusted, secure cloud infrastructure, and advanced AI tools. Our partnership with M&M will help enable a significant cloud and AI transformation for its enterprise and its global customers.”

The partnership will also see Tech Mahindra managing various enterprise applications and workloads for simulators, leveraging its expertise in analytics and cloud migration. This strategic move promises significant value to M&M’s global customer base, aligning with Tech Mahindra’s ongoing efforts to enhance productivity through gen AI tools.

Tech Mahindra and Big Cloud Partnerships

Tech Mahindra has been continuously partnering with big tech cloud providers to leverage generative AI applications on their platforms. Recently, the company partnered with Microsoft to deploy dedicated Copilot tools to transform its workplace. 

Similarly, the company has also partnered with Yellow.ai to enhance its HR and customer service automation solutions. 

]]>
Gen AI Startup Landscape in India 2024 https://analyticsindiamag.com/ai-highlights/gen-ai-startup-landscape-in-india-2024/ https://analyticsindiamag.com/ai-highlights/gen-ai-startup-landscape-in-india-2024/#respond Wed, 07 Aug 2024 15:29:28 +0000 https://analyticsindiamag.com/?p=10131736 The "Gen AI Startups in India 2024" report showcases the explosive growth of Gen AI startups, driven by venture capital and tech talent in hubs like Bengaluru and Delhi-NCR. It highlights key players like Krutrim and Sarvam AI, the surge in early-stage funding, and India’s rising influence in the global Gen AI landscape.]]>

Generative Artificial Intelligence (Gen AI) is revolutionizing content creation by enabling machines to generate text, images, music, and videos using machine learning models that analyze and replicate patterns from human-created data. This technology is enhancing customer interactions, automating repetitive tasks like RFP responses and multilingual marketing, and exploring unstructured data through conversational interfaces.

From 2021 to 2024, India has witnessed a significant increase in Gen AI startups, driven by venture capital investments, a skilled workforce, and the concentration of startups in tech hubs like Bengaluru, Mumbai, and Delhi-NCR. Other Indian cities are emerging as key players: Hyderabad contributes 7.2% of Gen AI startups, supported by its strong IT and pharmaceutical sectors, while Chennai and Ahmedabad are also growing as tech hubs, with Chennai focusing on SaaS and Ahmedabad benefiting from institutions like IIM Ahmedabad.

Funding for Gen AI startups has surged, particularly in early stages, with seed funding accounting for 54% of the rounds. This financial backing is crucial for driving innovation and scaling operations. Notable startups like Krutrim and Sarvam AI are leading the sector, leveraging their funding to advance AI models and expand their market presence.

As of now, India hosts around 150 Gen AI startups, a number fueled by increasing funding and government initiatives like the National AI Strategy and Startup India program. With continued innovation and expansion, India is set to become a global leader in generative AI.



Key Highlights:

  • The global Gen AI market is projected to grow from USD 39.2 billion in 2023 to USD 1082.4 billion by 2033. The market is anticipated to expand at a CAGR of 40.22% from 2024 to 2033.
  • There are over 150 Gen AI startups currently operating in India.
  • 51.4% of India’s Gen AI startups are based in Bengaluru, positioning the city as a leading hub for Gen AI startups in the country.
  • Gen AI startups such as Krutrim AI, Sarvam AI, EMA, Neysa Networks, Raga AI have all received substantial funding in 2024.
  • In 2024, Krutrim AI raised USD 50 million in funding.
  • 54.0% of AI startups obtained funding in their seed round.


Read the complete report here:

]]>
Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer https://analyticsindiamag.com/ai-features/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/ https://analyticsindiamag.com/ai-features/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/#respond Fri, 02 Aug 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10131210 "AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers."]]>

Kuku FM, a popular audio content platform backed by Google and Nandan Nilekani’s Fundamentum Partnership, is harnessing the power of generative AI to revolutionise how stories are created, produced, and consumed. This transformation is spearheaded by Kunj Sanghvi, the VP of content at Kuku FM, who told AIM that generative AI is part of their everyday work and content creation.

“On the generative AI side, we are working pretty much on every layer of the process involved,” Sanghvi explained. “Right from adapting stories in the Indian context, to writing the script and dialogues, we are trying out AI to do all of these. Now, in different languages, we are at different levels of success, but in English, our entire process has moved to AI.”

Kuku FM is leveraging AI not just for content creation but for voice production as well. The company uses ElevenLabs, ChatGPT APIs, and other available offerings to produce voices directly.

“Dramatic voice is a particularly specific and difficult challenge, and long-form voice is also a difficult challenge. These are two things that most platforms working in this space haven’t been able to solve,” Sanghvi noted. 

Beyond long-form content moving to generative AI, Kuku FM also uses it for thumbnail generation, visual asset generation, and description generation, and Sanghvi said that the team has custom GPTs for every process.

Compensating Artists

AI is playing a crucial role in ensuring high-quality outputs across various languages and formats. “In languages like Hindi and Tamil, the quality is decent, but for others like Telugu, Kannada, Malayalam, Bangla, and Marathi, the output quality is still poor,” said Sanghvi. 

However, the quality improves every week. “We put out a few episodes even in languages where we’re not happy with the quality to keep experimenting and improving,” Sanghvi added.

Beyond content creation, AI is helping Kuku FM in comprehensively generating and analysing metadata. “We have used AI to generate over 500 types of metadata on each of our content. AI itself identifies these attributes, and at an aggregate level, we can understand what makes certain content perform better than others,” he mentioned.
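
The general pattern behind this kind of metadata generation can be sketched as follows. The five attributes, the prompt, and the stubbed call_llm() response are assumptions made for illustration; Kuku FM’s actual custom GPTs and 500-attribute schema are not public.

```python
# Sketch of LLM-driven content metadata generation in the spirit of what
# Sanghvi describes. The five attributes, the prompt, and the stubbed
# call_llm() response are assumptions; Kuku FM's custom GPTs and
# 500-attribute schema are not public.

import json

ATTRIBUTES = ["genre", "mood", "pace", "protagonist_type", "setting"]

PROMPT_TEMPLATE = (
    "Read the episode summary below and return a JSON object with exactly "
    "these keys: {keys}.\n\nSummary:\n{summary}"
)


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned response so the
    # sketch runs end to end.
    return json.dumps({
        "genre": "thriller",
        "mood": "tense",
        "pace": "fast",
        "protagonist_type": "reluctant detective",
        "setting": "Mumbai, present day",
    })


def extract_metadata(summary: str) -> dict:
    prompt = PROMPT_TEMPLATE.format(keys=", ".join(ATTRIBUTES), summary=summary)
    metadata = json.loads(call_llm(prompt))
    # Validate the schema before the record enters aggregate analysis.
    missing = [key for key in ATTRIBUTES if key not in metadata]
    if missing:
        raise ValueError(f"Model response missing keys: {missing}")
    return metadata


if __name__ == "__main__":
    print(extract_metadata("A retired officer is pulled into one last case."))
```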

One of the most transformative aspects of Kuku FM’s use of AI is its impact on creators. The platform is in the process of empowering 5,000 creators to become full-stack creative producers. 

“As the generative AI tools become better, every individual is going to become a full-stack creator. They can make choices on the visuals, sounds, language, and copy, using AI as a co-pilot,” Sanghvi said. “We are training people to become creative producers who can own their content from start to end.”

When asked about the competitive landscape such as Amazon’s Audible or PocketFM, and future plans, Sanghvi emphasised that AI should not be viewed as a moat but as a platform. “Every company of our size, not just our immediate competition, will use AI as a great enabler. AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers,” he explained.

Everyone’s a Storyteller with AI

In a unique experiment blending generative AI tools, OpenAI co-founder Andrej Karpathy used the Wall Street Journal’s front page to produce a music video on August 1, 2024. 

Karpathy copied the entire front page of the newspaper into Claude, which generated multiple scenes and provided visual descriptions for each. These descriptions were then fed into Ideogram AI, an image-generation tool, to create corresponding visuals. Next, the generated images were uploaded into RunwayML’s Gen 3 Alpha to make a 10-second video segment.

Sanghvi also touched upon the possibility of edge applications of AI, like generating audiobooks in one’s voice. “These are nice bells and whistles but are not scalable applications of AI. However, they can dial up engagement as fresh experiments,” he said.

Kuku FM is also venturing into new formats like video and comics, generated entirely through AI. He said that the team is not going for shoots or designing characters in studios. “Our in-house team works with AI to create unique content for video, tunes, and comics,” he revealed.

Sanghvi believes that Kuku FM is turning blockbuster storytelling into a science, making it more accessible and understandable. “The insights and structure of a story can now look like the structure of a product flow, thanks to AI,” Sanghvi remarked. 

“This democratises storytelling, making every individual a potential storyteller.” As Sanghvi aptly puts it, “The only job that will remain is that of a creative producer, finding fresh ways to engage audiences, as AI will always be biased towards the past.”

]]>
GenAI Is NOT a Bubble, It’s a Tree  https://analyticsindiamag.com/ai-features/genai-is-not-a-bubble-its-a-tree/ https://analyticsindiamag.com/ai-features/genai-is-not-a-bubble-its-a-tree/#respond Thu, 01 Aug 2024 05:57:52 +0000 https://analyticsindiamag.com/?p=10131093 And its branching...]]>

Many believe that the rush to adopt generative AI may soon lead to a bubble burst. OpenAI, creator of ChatGPT, faces high operating costs and insufficient revenue, potentially leading to losses of up to $5 billion in 2024 and risking bankruptcy within a year.

OpenAI is expected to spend nearly $4 billion this year on Microsoft’s servers and almost $3 billion on training its models. With its workforce of around 1,500 employees, expenses could reach up to $1.5 billion. In total, operational costs may hit $8.5 billion, while revenue stands at only $3.4 billion.

However, some believe otherwise. “As long as Sam Altman is CEO of OpenAI, OpenAI will never go bankrupt. He will continue to drop mind-blowing demos and feature previews, and raise billions. I am not being sarcastic, it’s the truth,” posted AI influencer Ashutosh Shrivastava on X. 

He added that with products like Sora, the Voice Engine, GPT-4’s voice feature, and now SearchGPT, anyone who thinks OpenAI will go bankrupt is simply underestimating Altman.

As OpenAI prepares to seek more funding in the future, it’s essential for Altman to create more bubbles of hype. Without this, the industry risks underestimating the full impact of generative AI. 

Chinese investor and serial entrepreneur Kai-Fu Lee is bullish about OpenAI becoming a trillion-dollar company in the next two to three years. “OpenAI will likely be a trillion-dollar company in the not-too-distant future,” said Lee recently. 

On the contrary, analysts and investors from major financial institutions like Goldman Sachs, Sequoia Capital, Moody’s, and Barclays have released reports expressing concerns about the profitability of the substantial investments in generative AI.

Sequoia Capital partner David Cahn’s recent blog, “AI’s $600B Question”, points out the gap between AI infrastructure spending and revenue. He suggests the industry needs to generate around $600 billion annually to cover investment costs and achieve profitability.

Early Signs of an AI Bubble? 

Microsoft shares fell 7% on Tuesday as the tech giant reported lower-than-expected revenue. Revenue from its Intelligent Cloud unit, which includes the Azure cloud-computing platform, rose 19% to $28.5 billion in the fourth quarter, missing analysts’ estimates of $28.68 billion.

Despite that, the company announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected. 

Microsoft CFO Amy Hood explained that the spending is essential to meet the demand for AI services, adding that the company is investing in assets that “will be monetised over 15 years and beyond.” CEO Satya Nadella also said that Azure AI now boasts over 60,000 customers, marking a nearly 60% increase year-on-year, with the average spending per customer also on the rise. 

Last week, Google’s cloud revenue exceeded $10 billion, surpassing estimates for Q2 2024. The company is, however, facing increasing AI infrastructure costs. Google CEO Sundar Pichai insists, “The risk of under-investing far outweighs the risk of over-investing for us.” He warned, “Not investing to stay ahead in AI carries much more significant risks.”

“If you take a look at our AI infrastructure and generative AI solutions for cloud across everything we do, be it compute on the AI side, the products we have through Vertex AI, Gemini for Workspace and Gemini for Google Cloud, etc, we definitely are seeing traction,” Pichai said, elaborating that the company now boasts over two million developers playing around with Gemini on Vertex and AI Studio. 

“AI is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” said Jim Covello, Goldman Sachs’ head of global equity research.

Really? 

Recently, Google DeepMind’s AlphaProof and AlphaGeometry 2 AI models worked together to tackle questions from the International Math Olympiad (IMO). The DeepMind team scored 28 out of 42 – enough for a silver medal but one point short of gold.

Meanwhile, the word on the street is that OpenAI is planning to start a healthcare division focused on developing new drugs using generative AI. Recently, the startup partnered with Moderna to develop mRNA medicines. The company is already working with Whoop, Healthify, and 10BedICU in healthcare. 

https://twitter.com/thekaransinghal/status/1815444769868574944

JPMorgan recently launched its own AI chatbot, LLM Suite, providing 50,000 employees (about 15% of its workforce) in its asset and wealth management division with a platform for writing, idea generation, and document summarisation. This rollout marks one of Wall Street’s largest LLM deployments.

“AI is real. We already have thousands of people working on it, including top scientists around the world like Manuela Veloso from Carnegie Mellon Machine Learning,” said JP Morgan chief Jamie Dimon, adding that AI is already a living, breathing entity.

“It’s going to change, there will be all types of different models, tools, and technologies. But for us, the way to think about it is in every single process—errors, trading, hedging, research, every app, every database—you’re going to be applying AI,” he predicted. “It might be as a copilot, or it might be to replace humans.”

Investor and Sun Microsystems founder Vinod Khosla is betting on generative AI and remains unfazed by the surrounding noise. “These are all fundamentally new platforms. In each of these, every new platform causes a massive explosion in applications,” Khosla said. 

Further, he acknowledged that the rush into AI might lead to a financial bubble where investors could lose money, but emphasised that this doesn’t mean the underlying technology won’t continue to grow and become more important.

Declining Costs

Dario Amodei, CEO of Anthropic, has predicted that training a single AI model, such as GPT-6, could cost $100 billion by 2027. In contrast, a trend is emerging towards small language models: more cost-efficient models that are easier to run without requiring extensive infrastructure. 

OpenAI co-founder Andrej Karpathy recently said that the cost of building an LLM has come down drastically over the past five years due to improvements in compute hardware (H100 GPUs), software (CUDA, cuBLAS, cuDNN, FlashAttention) and data quality (e.g., the FineWeb-Edu dataset).

Abacus.AI chief Bindu Reddy predicted that in the next five years, smaller models will become more efficient, LLMs will continue to become cheaper to train, and LLM inference will become widespread. “We should expect to see several Sonnet 3.5 class models that are 100x smaller and cheaper in the next one to two years.”

The Bigger Picture 

Generative AI isn’t represented by a single bubble like the dot-com era but is manifested in multiple, industry-specific bubbles. For example, generative AI tools for video creation, such as Sora and Runway, demand much more computational power than customer care chatbots. Despite these variations, generative AI is undeniably a technology with lasting impact and is here to stay.

“I think people are using ‘bubble’ too lightly and without much thought, as they have become accustomed to how impressive ChatGPT or similar tools are and are no longer impressed. They are totally ignoring trillion-dollar companies emerging with countless new opportunities. Not everything that grows is a bubble, and we should stop calling AI a bubble or a trend. It is a new way of doing things, like the internet or smartphones,” posted a user on Reddit. 

“AI is more like…a tree. It took a long time to germinate, sprouted in 2016, became something worth planting in 2022, and is now digging its roots firmly in. Is the tree bubble over now? Heh. Just like a tree, AI’s impact and value will keep growing and evolving. It’s not a bubble; it’s more like an ecosystem,” said another user on Reddit. 

The Bubblegum effect: The issue today is that investors are using OpenAI and NVIDIA as benchmarks for the AI industry, which may not be sustainable in the long term. While NVIDIA has had significant success with its H100s and B200s, it cannot afford to become complacent. 

The company must continually innovate to reduce training costs and maintain its edge. This concern is evident in NVIDIA chief Jensen Huang’s anxiety about the company’s future.

“I am paranoid about going out of business. Every day I wake up in a sweat, thinking about how things could go wrong,” said Huang. 

He further explained that in the hardware industry, planning two years in advance is essential due to the time required for chip fabrication. “You need to have the architecture ready. A mistake in one generation of architecture could set you back by two years compared to your competitor,” he said.

NVIDIA’s success should not be taken for granted, even with the upcoming release of its latest GPU, Blackwell. Alternatives to NVIDIA are increasingly available, particularly for inference tasks, including Google TPUs and Groq. Recently, Groq demonstrated impressive inference speed with Llama 3.1, and Apple selected Google TPUs over NVIDIA GPUs for its model training needs.

Most recently, AI hardware company Etched.ai unveiled Sohu, a chip purpose-built to run transformer models. Etched claims that Sohu can process over 500,000 tokens per second with Llama 70B, and that one 8xSohu server replaces 160 H100s. According to the company, “Sohu is more than ten times faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.”

Meta recently released Llama 3.1, which is currently competing with GPT-4o. Meta chief Mark Zuckerberg is confident that Llama 3.1 will have a similar impact on the AI ecosystem as Linux had on the operating system world. Moreover, Meta also recently launched AI Studio, which allows creators to build and share customisable AI agents.

In contrast, “I hate the AI hype and, at the same time, I think AI is very interesting,” said Linus Torvalds, the creator of the Linux kernel, in a recent conversation with Verizon’s Dirk Hohndel. When asked if AI is going to replace programmers and creators, Torvalds asserted that he doesn’t want to be a part of the AI hype, and suggested that we should wait ten years before making broad announcements, such as claiming that jobs will be lost in the next five years. 

Bursting the Bubble 

With AI representing more than just a single bubble, some of these bubbles may burst. Gartner predicts that by the end of 2025, at least 30% of generative AI projects will be abandoned after the proof-of-concept stage due to factors such as poor data quality, inadequate risk control, escalating costs, and unclear business value.

Some start-ups that thrived during the initial AI boom are now encountering difficulties. Inflection AI, founded by ex-Google DeepMind veterans, secured $1.3 billion last year to expand its chatbot business. However, in March, the founders and some key employees moved to Microsoft. Other AI firms, like Stability AI, which developed a popular AI image generator, have faced layoffs. The industry also contends with lawsuits and regulatory challenges.

Meanwhile, Karpathy is puzzled as to why state-of-the-art LLMs can perform extremely impressive tasks (e.g., solving complex math problems) while simultaneously struggling with very simple ones, such as incorrectly determining that 9.11 is larger than 9.9. He calls this “Jagged Intelligence”.

]]>
Ascendion’s Generative AI Revenue Increases 382% in H1 24 https://analyticsindiamag.com/ai-news-updates/ascendion-genai-revenue-increases-382-y-o-y/ https://analyticsindiamag.com/ai-news-updates/ascendion-genai-revenue-increases-382-y-o-y/#respond Wed, 31 Jul 2024 12:07:24 +0000 https://analyticsindiamag.com/?p=10130896 Ascendion has trained more than 1,500 employees in GenAI tools so far]]>

Ascendion, a provider of digital engineering services, reported a 382% increase in generative AI revenue in the first half of 2024 compared to the same period in the previous year. The company has successfully completed over 50 generative AI programmes so far in 2024.

The company said it has trained more than 1,500 employees in GenAI tools, and 57% of the workforce is now Gen AI-literate.

“Our commitment to ‘Engineering to the power of AI’ has driven market-leading progress in the first half of 2024. From groundbreaking client impact to significant productivity gains, we’ve set the bar for the industry. GenAI is already driving a significant part of our growth and business, and as we move into the second half of the year, we’re doubling down on innovation and excellence, poised to deliver even greater value and transformation,” said Karthik Krishnamurthy, CEO of Ascendion.

Leveraging Ascendion’s AVA+ platform, a Fortune 50 bank achieved a 50% increase in productivity and a 40% reduction in effort for data extraction and validation processes. Similarly, a major manufacturing client leveraged generative AI for customer service optimisation, resulting in a 20% increase in operational efficiency and a 25% boost in system efficacy.

Earlier this year, Ascendion also unveiled a new AI Studio in Chennai to foster creativity, collaboration, and real-time problem-solving, enabling clients to witness the transformative power of Gen AI firsthand.

]]>
Unlocking Opportunities with GenSQL: Leveraging Large Language Models for Structured Data https://analyticsindiamag.com/ai-trends/unlocking-opportunities-with-gensql/ https://analyticsindiamag.com/ai-trends/unlocking-opportunities-with-gensql/#respond Wed, 31 Jul 2024 09:41:07 +0000 https://analyticsindiamag.com/?p=10130823 In today’s data-driven world, the ability to efficiently query and manipulate structured data is paramount for organizations. GenSQL, a tool that harnesses the power of Large Language Models (LLMs), offers a revolutionary approach to interacting with structured data. This article explores the myriad opportunities that GenSQL presents, highlighting its capabilities with practical examples and referencing […]]]>

In today’s data-driven world, the ability to efficiently query and manipulate structured data is paramount for organizations. GenSQL, a tool that harnesses the power of Large Language Models (LLMs), offers a revolutionary approach to interacting with structured data. This article explores the myriad opportunities that GenSQL presents, highlighting its capabilities with practical examples and referencing research from the Massachusetts Institute of Technology (MIT)1.

GenSQL empowers Large Language Models (LLMs) to interact seamlessly with structured data through natural language. By translating human language queries into precise SQL statements, it unlocks a world of possibilities for data analysis and exploration.

Understanding GenSQL

At its core, GenSQL is a sophisticated system that interprets natural language queries and converts them into executable SQL code. This process eliminates the need for users to possess in-depth SQL knowledge, making data accessible to a broader audience. GenSQL leverages the power of LLMs to comprehend complex queries, handle ambiguities, and generate accurate SQL statements.

Put simply, GenSQL uses the advanced natural language processing capabilities of LLMs to generate SQL queries from natural language inputs, enabling users to interact with databases in a more intuitive and accessible manner. This bridges the gap between technical and non-technical users, facilitating better data utilization and decision-making.
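
The article does not detail GenSQL’s internals, but the natural-language-to-SQL pattern it describes can be sketched as follows, using a query similar to the ‘top 5 customers by revenue’ example that appears below. The schema, the sample rows, and the stubbed nl_to_sql() translation are all invented for the demo.

```python
# Minimal sketch of the natural-language-to-SQL pattern the article
# attributes to GenSQL, demonstrated against an in-memory SQLite table.
# nl_to_sql() is a stub standing in for an LLM call; the schema and data
# are invented for the demo.

import sqlite3

SCHEMA = "CREATE TABLE orders (customer TEXT, region TEXT, revenue REAL)"
ROWS = [
    ("Acme", "North", 1200.0), ("Birla", "South", 2100.0),
    ("Cyient", "North", 800.0), ("Acme", "South", 400.0),
]


def nl_to_sql(question: str, schema: str) -> str:
    # A real system would prompt an LLM with the question plus the schema
    # and return its SQL; here we return a canned translation of the
    # question "Show me the top 5 customers by revenue".
    return (
        "SELECT customer, SUM(revenue) AS total_revenue "
        "FROM orders GROUP BY customer "
        "ORDER BY total_revenue DESC LIMIT 5"
    )


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", ROWS)
    sql = nl_to_sql("Show me the top 5 customers by revenue", SCHEMA)
    for customer, total in conn.execute(sql):
        print(f"{customer}: {total:,.0f}")
```

Because generated SQL is untrusted input, a real deployment would validate it, restrict it to read-only operations, and parameterise anything user-supplied.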

Opportunities Offered by GenSQL

  1. Democratization of Data: GenSQL empowers users from various backgrounds to interact with data without requiring SQL expertise. This democratization of data fosters data-driven decision-making across organizations.
    Example: A marketing analyst can ask, “What is the sales trend for product A in the last quarter?” and GenSQL will generate the corresponding SQL query to retrieve the necessary data.
  2. Enhanced Data Exploration: Users can explore data intuitively through natural language, uncovering hidden patterns and insights.
    Example: A financial analyst can query, “Show me the top 5 customers by revenue,” and GenSQL will generate the SQL query to visualize the results.
  3. Complex Query Handling: GenSQL can handle intricate queries involving multiple tables, joins, aggregations, and filters.
    Example: A data scientist can ask, “Calculate the average order value for customers who made purchases in both Q1 and Q2, grouped by region,” and GenSQL will generate the appropriate SQL query.
  4. Natural Language Interfaces: GenSQL enables the creation of intuitive natural language interfaces for various applications, including chatbots, virtual assistants, and data visualization tools.
    Example: A customer support chatbot can answer questions like “When was my last order?” using GenSQL to retrieve the relevant data from the database.
  5. Augmented Intelligence: GenSQL can assist analysts in formulating complex queries by suggesting relevant terms and refining search criteria.
    Example: An analyst can ask, “Show me the correlation between customer age and purchase frequency,” and GenSQL can suggest additional variables like “product category” or “location” to enhance the analysis.
  6. Improved Data Governance: GenSQL can be integrated with data governance frameworks to ensure data security and compliance.
    Example: GenSQL can prevent unauthorized access to sensitive data by blocking queries that violate data privacy regulations.

Challenges and Considerations

While GenSQL offers significant advantages, it’s essential to consider the following challenges:

  • Data Quality: The accuracy of GenSQL’s generated SQL queries depends on the quality of the underlying data.
  • Ambiguity: Natural language can be ambiguous, leading to potential misinterpretations of user intent.
  • Performance: Complex queries might require optimization to ensure efficient query execution.

Insights from MIT Research

Research from MIT underscores the transformative potential of integrating LLMs with data querying tools like GenSQL. According to a study by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the use of LLMs in generating SQL queries from natural language inputs can lead to significant improvements in data accessibility and usability. The study highlights that LLMs can understand context and nuances in natural language, making them particularly effective for translating complex queries into accurate SQL statements.

Moreover, MIT researchers have demonstrated that LLMs can learn from vast datasets to recognize patterns and predict the structure of SQL queries based on the context provided by the user. This predictive capability enhances the accuracy of generated queries and reduces the need for manual intervention, thereby increasing efficiency and reducing the likelihood of errors.

Conclusion

GenSQL represents a significant advancement in the way organizations interact with structured data. By democratizing data access, enhancing data exploration, and enabling complex query handling, it empowers users to extract maximum value from their data assets. Its natural language interface makes data accessible to a broader audience, improving productivity, consistency, and the speed of decision-making, and fostering a more data-driven culture within organizations. As the technology continues to evolve, the opportunities it offers will only expand, driving further innovation and efficiency in data management and analysis.

By integrating GenSQL into their data workflows, businesses can unlock the full potential of their data assets, gaining valuable insights and maintaining a competitive edge in today’s dynamic marketplace.

  1. MIT Research ↩
]]>
Indian GenAI Startup Devnagri Secures Funding Led by Inflection Point Ventures https://analyticsindiamag.com/ai-news-updates/indian-genai-startup-devnagri-secures-funding-led-by-inflection-point-ventures/ https://analyticsindiamag.com/ai-news-updates/indian-genai-startup-devnagri-secures-funding-led-by-inflection-point-ventures/#respond Fri, 26 Jul 2024 03:44:48 +0000 https://analyticsindiamag.com/?p=10130265 The company was co-founded by Nakul Kundra and Himanshu Sharma, who collectively bring over 15 years of entrepreneurial experience.]]>

Devnagri, a generative AI company based in Noida specialising in personalising business communication for non-English speakers, has raised an undisclosed amount in a Pre-Series A round led by Inflection Point Ventures. 

The newly acquired funds will be used for marketing, sales, technology enhancement, R&D, infrastructure, and administrative expenses.

Devnagri leverages advanced NLP and Small Language Models (SLMs) to tailor business communications for diverse linguistic audiences, seamlessly integrating its technology into both private and government infrastructures. This approach addresses the unique linguistic needs of non-English speakers, enhancing communication and engagement.

The company was co-founded by Nakul Kundra and Himanshu Sharma, who collectively bring over 15 years of entrepreneurial experience. Kundra, an MBA in Marketing & Finance, has a strong background in business strategy, while Sharma, also an MBA in Marketing and a skilled coder, combines technical expertise with business acumen.

Devnagri’s innovative solutions have earned it top BLEU scores in Indian languages, significantly impacting the Indian language ecosystem. By expanding from NLP products to generative AI and SLMs, the company empowers customers to personalise their content, meeting the urgent need for localised communication. 

This enables businesses to scale operations efficiently in Tier II and Tier III cities, broadening their reach and engagement in cost-effective ways.

Kundra, co-founder of Devnagri, stated that communication is for the receiver. Hence, the Law of Attraction will only work when businesses communicate well with their audiences. The company is focused on creating hyper-local communication layers to enable businesses to communicate with their customers in their language.

“Taking a step forward, we are moving towards offering the enterprises and Government departments with a private cloud infrastructure, to maintain their ownership of their content and by keeping the LLMs/SLMs trained with every usage by the customer,” said Kundra.

Inflection Point Ventures has invested over INR 720 Cr across 200+ startups to date. Mitesh Shah, Co-Founder of Inflection Point Ventures, emphasised the challenges of translating India’s more than 700 languages and the importance of accuracy, context, and cultural nuances. “The platform ensures precise translations, context-awareness, and localisation, enabling seamless communication across diverse Indian languages,” said Shah.

Devnagri has received numerous prestigious awards, including the TieCon Award 2024 in San Francisco, the Graham Bell Award 2023, a feature in Shark Tank India 2022, and recognition as NASSCOM’s Emerging NLP Startup of India.

The opportunity market for Devnagri is projected to be valued at $100 billion globally by 2030, with $53 billion in India, growing at a CAGR of 6.7%. As the language industry takes shape in India, it will create sub-industries and transform communication for everyone.

]]>