
How to Successfully Deploy AI Assistants in Enterprises 

Successfully deploying AI assistants in enterprises requires a strategic approach—emphasising internal testing, user-centric design, pre-processing for accuracy, and a ruthless prioritisation of business needs.

It is no secret that AI-enabled assistants and agents hold unprecedented potential to transform enterprise functions, from customer support and sales to marketing, revenue management, and research. 

Sourav Banerjee, head of R&D and platform at MathCo, demonstrated how to successfully deploy AI agents in an enterprise at the Machine Learning Developers’ Summit (MLDS) 2025.

The company is currently developing GenAI-enabled conversational assistants that provide brand, marketing, and research insights, as well as assistants that help with data discoverability, customer support, and revenue management. 

Moreover, MathCo is also developing AI solutions that integrate into the workflow, such as a research summariser, a contracts review assistant, and a product trend analyser. These solutions mostly focus on converting unstructured data into structured data, which is then integrated further into the workflow. 

The company’s proprietary AI platform, NucliOS, is designed to help enterprises build end-to-end automated solutions through pre-built workflows and plug-and-play modules. It integrates generative AI features to enhance the platform’s ability to generate insights and summaries to improve users’ decision-making. 

Banerjee revealed a number of strategies for the successful implementation of these solutions. 

He said the company noticed a high success rate when these solutions were tested internally first.

Internal testing is a good way to contain any potential damage within the organisation while the team learns what a solution can and cannot do. 

Moreover, building on top of paid API providers is an efficient way to deploy these solutions, since it does not require much additional infrastructure. However, Banerjee revealed that once customers see the benefits during testing and production, most of them turn towards open-source alternatives. 

Importance of ‘Buy, not Build’, Pre-Processing and Specific Contexts

Banerjee illustrated a detailed process of kickstarting an AI project. The process begins with identifying the right use case and then involving users early in the conceptualisation phase. Getting the users’ feedback early is critical for building an efficient solution. 

While it may seem tempting to reinvent the wheel and build a solution from scratch, Banerjee asserted that existing products and solutions are often already available. Buying rather than building speeds up the delivery of a proof of concept. 

Another important aspect is the user experience: unlike traditional systems, AI outputs are probabilistic rather than deterministic. MathCo deals with this by giving users features to control the output, such as the ability to edit responses, update the context, display sources, and choose between multiple options. It is also important to show users the intermediate steps when a response takes time to generate. 

Banerjee delved into the importance of pre-processing, stressing that poor input data leads to poor AI outputs. For instance, pre-processing improves accuracy in text-to-SQL models and in market research document analysis. 

He illustrated this process with the example of preparing a presentation slide for Q&A interaction. First, the slide is converted to JSON format. Next, text, charts, and images are extracted and separated. Finally, metadata is generated for faster retrieval, and context is preserved by linking each slide to the previous and next slides. This approach helps the AI process the document more effectively.
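
The same flow can be sketched in a few lines of Python. This is only an illustration of the structure described above, not MathCo's actual pipeline; the slide dictionary, field names, and sample deck are all assumptions.

```python
import json

def preprocess_slide(slide: dict, index: int, total: int) -> dict:
    """Turn one parsed slide into a retrieval-ready record."""
    return {
        "slide_id": index,
        # Keep each modality separate so it can be embedded or summarised on its own
        "text": " ".join(slide.get("text", [])),
        "charts": slide.get("charts", []),
        "images": slide.get("images", []),
        # Lightweight metadata for faster retrieval and filtering
        "metadata": {
            "title": slide.get("title", ""),
            "n_charts": len(slide.get("charts", [])),
        },
        # Preserve deck-level context by linking neighbouring slides
        "prev_slide": index - 1 if index > 0 else None,
        "next_slide": index + 1 if index < total - 1 else None,
    }

# Hypothetical two-slide deck used purely for illustration
deck = [
    {"title": "Q3 Sales", "text": ["Revenue grew 12% quarter on quarter."], "charts": ["sales_by_region"]},
    {"title": "Outlook", "text": ["Guidance raised for Q4."]},
]
records = [preprocess_slide(s, i, len(deck)) for i, s in enumerate(deck)]
print(json.dumps(records, indent=2))
```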

Besides, AI also needs to understand enterprise-specific context to be as useful as possible. This includes data structures, business workflows, and user roles. For example, if a prompt asks for sales figures for “my brand”, the LLM does not know which brand that refers to. The organisation has to categorise the various types of context and pass them to the model. 
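
A minimal sketch of what passing that categorised context might look like is shown below. The context categories, the "Acme Shampoo" brand, and the prompt format are illustrative assumptions, not MathCo's implementation.

```python
# Hypothetical enterprise context gathered for the current user and session
user_context = {
    "user_role": "brand manager",
    "brand": "Acme Shampoo",                                  # what "my brand" should resolve to
    "data_schema": "sales(brand, region, month, revenue)",    # relevant data structure
    "workflow": "monthly revenue review",                     # relevant business workflow
}

question = "What were the sales figures for my brand last quarter?"

# Fold the categorised context into the prompt before it reaches the LLM
prompt = (
    "You are an analytics assistant.\n"
    f"User role: {user_context['user_role']}\n"
    f"'my brand' refers to: {user_context['brand']}\n"
    f"Available data: {user_context['data_schema']}\n"
    f"Current workflow: {user_context['workflow']}\n\n"
    f"Question: {question}"
)
print(prompt)  # this string would then be sent to whichever LLM the team uses
```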

Testing AI Solutions and ‘Prioritising Your Hierarchy of Needs Ruthlessly’

While the onus is on deploying and perfecting these assistants, testing is equally important. Using LLMs as judges can be helpful, but human-generated test data is far more reliable and preferred. LLM judges also add another round of latency to the workflow, and Banerjee observed a lack of nuance when LLMs were used to generate questions and answers. 

Moreover, multi-agent systems require testing of each agent as well as the final system. However, real-life accuracy can differ considerably from test accuracy. Banerjee suggested measuring this drift by comparing embeddings of test questions against those of production questions. 
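
One simple way to implement such a drift check is sketched below, under the assumption that the two question sets are compared via the cosine similarity of their mean embeddings. The embeddings here are random stand-ins rather than output from a real model, and the exact metric MathCo uses was not specified.

```python
import numpy as np

def centroid(embeddings: np.ndarray) -> np.ndarray:
    """Average embedding of a set of questions."""
    return embeddings.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; in practice these would come from the team's embedding model
rng = np.random.default_rng(0)
test_embeddings = rng.random((50, 384))    # questions used during testing
prod_embeddings = rng.random((200, 384))   # questions asked in production

drift = 1.0 - cosine_similarity(centroid(test_embeddings), centroid(prod_embeddings))
print(f"Drift between test and production questions: {drift:.3f}")
# A higher value suggests production questions have moved away from the test set,
# and the test suite may need refreshing.
```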

However, to accelerate the development of all of these solutions, reusable modules are essential. For example, Tableau and Power BI for front-end integrations, pre-built agents such as text-to-SQL and document summarisation, and automated testing tools can cut down time and improve scalability.

Deploying these solutions requires a combination of AI, engineering, and product management skills across all phases. For example, AI engineers will need to define the problem and create prototypes. AI product managers, UX designers, and engineers will have to build a usable product for users. Software developers, data engineers, and testers will have to refine the system for production.

His last section was arguably the most important and began with a hard-hitting statement from a guide on Applied LLMs: “Prioritise your hierarchy of needs ruthlessly.” 

The system needs to be reliable, harmless, useful, scalable, cost-effective, and secure. While Banerjee did say that not all aspects can be solved at once, he suggested prioritising at least two or three factors. 

Supreeth Koundinya
Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.