Why Businesses Shouldn’t Treat LLMs as Databases

Microsoft chief Satya Nadella recently said that traditional SaaS companies will collapse in the AI agent era. 

Despite the rise of AI, SaaS companies continue to play a crucial role, as large language models (LLMs) cannot function as databases. Sridhar Vembu, founder of Indian SaaS company Zoho, recently explained that neural networks “absorb” data in a way that makes it impossible to update, delete, or retrieve specific information accurately.

According to Vembu, this is not just a technological challenge but a fundamental mathematical and scientific limitation of the current AI approach.

He explained that if a business trains an LLM using its customer data, the model cannot update itself when a customer modifies or deletes their data. This is because there is no clear mapping between the original data and the trained parameters. Even if the model is dedicated to a single customer, there is no way to guarantee that their data changes will be reflected accurately.

Vembu compared the process of training LLMs to dissolving trillions of cubes of salt and sugar in a vast lake. “After the dissolution, we cannot know which of the cubes of sugar went where in the lake—every cube of sugar is everywhere!” 

Notably, Klarna CEO Sebastian Siemiatkowski recently shared on X that he experimented with replacing SaaS solutions like Salesforce by building in-house products with ChatGPT. 

His experience with LLMs was quite similar to Vembu’s. Siemiatkowski said that feeding an LLM the fragmented, dispersed, and unstructured world of corporate data would result in a very confused model.

He said that to address these challenges, Klarna explored graph databases (Neo4j) and concepts like ontology, vectors, and retrieval-augmented generation (RAG) to better model and structure knowledge.

Siemiatkowski explained that Klarna’s knowledge base, spanning documents, analytics, customer data, HR records, and supplier management, was fragmented across multiple SaaS tools, including Salesforce and various customer relationship management (CRM), enterprise resource planning (ERP), and Kanban systems.

He noted that each of these SaaS solutions operated with its own logic, making it difficult to create a unified, navigable knowledge system. By consolidating its databases, Klarna significantly reduced its reliance on external SaaS providers, eliminating around 1,200 applications.

Microsoft chief Satya Nadella, in a recent podcast, indirectly took a dig at Salesforce by saying that traditional SaaS companies will collapse in the AI agent era. 

He noted that most business applications—such as Salesforce, SAP, and traditional ERP/CRM systems—function as structured databases with interfaces for users to input, retrieve, and modify data. He likened them to CRUD databases with embedded business logic.

Nadella explained that AI agents will not be tied to a single database or system but will operate across multiple repositories, dynamically pulling and updating information.

“Business logic is all going to these agents, and these agents are going to be multi-repo CRUD. They’re not going to discriminate between what the back end is; they’re going to update multiple databases, and all the logic will be in the AI tier,” he said.
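Nadella’s “multi-repo CRUD” idea can be illustrated with a toy sketch: the business logic lives in an agent tier rather than inside any one application, and a single action writes to several independent backends. The stores and the update rule below are entirely hypothetical, made up for illustration:

```python
# Toy sketch of "multi-repo CRUD": business logic sits in an agent
# tier and updates several backends, instead of being embedded in
# any single application's database.

# Two hypothetical backends that know nothing about each other.
crm = {"C001": {"status": "lead"}}
erp = {"C001": {"invoices": []}}

def close_deal(customer_id: str, amount: float) -> None:
    """Logic in the agent tier: one business action, multiple repos."""
    crm[customer_id]["status"] = "customer"      # update the CRM record
    erp[customer_id]["invoices"].append(amount)  # update the ERP record

close_deal("C001", 4999.0)
```

The point of the sketch is only that neither backend holds the logic; the agent tier coordinates both, which is what Nadella means by logic moving out of the SaaS application.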

RAG is a Stopgap Solution

Vembu argued that RAG has its own limitations and cannot fully address the core problem: AI models are inherently static once trained. “In that sense, neural networks (and therefore LLMs) are not a suitable database,” he said.

“The RAG architecture keeps the business database separate and augments the user prompt with data fetched from the database,” he added.
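The pattern Vembu describes can be sketched in a few lines: the business data stays in an ordinary, updatable store, and whatever the model sees is fetched and spliced into the prompt at query time. The records, field names, and prompt format below are hypothetical, used only to illustrate the separation:

```python
# Minimal sketch of the RAG pattern Vembu describes: the business data
# lives in a conventional store, and the LLM only sees what is fetched
# and spliced into the prompt at query time.

# Hypothetical customer records. Updating or deleting a row here takes
# effect immediately, with no retraining of any model.
customer_db = {
    "C001": {"name": "Asha", "plan": "Pro", "status": "active"},
    "C002": {"name": "Ravi", "plan": "Free", "status": "cancelled"},
}

def retrieve(customer_id: str) -> dict:
    """Fetch the current record from the authoritative database."""
    return customer_db[customer_id]

def build_prompt(question: str, customer_id: str) -> str:
    """Augment the user's question with freshly retrieved data."""
    record = retrieve(customer_id)
    context = ", ".join(f"{k}={v}" for k, v in record.items())
    return f"Context: {context}\nQuestion: {question}"

# The customer changes their plan; the database, not the model,
# is the source of truth, so the next prompt reflects the change.
customer_db["C002"]["plan"] = "Pro"
prompt = build_prompt("What plan is this customer on?", "C002")
```

Because the model’s parameters never absorb the records, a deletion or update in `customer_db` is honored on the very next query, which is exactly the property Vembu says training cannot provide.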

In high-stakes applications, such as financial transactions, medical records, or regulatory compliance, this lack of adaptability could be a significant roadblock. 

“Vembu’s observations about LLMs’ static nature resonate strongly. The ‘frozen knowledge’ problem he describes isn’t just theoretical — it’s a practical challenge we grapple with daily in production environments,” Tagore Reddi, director of digital and data analytics at Hinduja Global Solutions, said. 

“While RAG architectures offer a workable interim solution, especially for sensitive enterprise data, they introduce their own complexity around data freshness, latency, and system architecture,” he added.  

However, RAG is advancing rapidly, especially in combination with vector search. Many database companies, including Pinecone, Redis, and MongoDB, now offer vector search for RAG. 
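At its core, the vector search these databases provide ranks stored documents by the similarity of their embeddings to the query embedding. A stripped-down sketch, with made-up 3-dimensional “embeddings” standing in for real model output:

```python
import math

# Toy vector search of the kind used for RAG: documents are stored as
# embedding vectors, and a query is matched to the nearest documents
# by cosine similarity. The 3-d vectors here are invented for the demo.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.1],
    "account deletion": [0.0, 0.2, 0.9],
}

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list, k: int = 1) -> list:
    """Return the k document titles closest to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector close to the "account deletion" embedding.
result = top_k([0.05, 0.1, 0.95], k=1)
```

Production systems replace the dictionary with an approximate-nearest-neighbor index so the lookup stays fast at millions of vectors, but the ranking principle is the same.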

Pinecone recently launched Assistant, an API service that simplifies building RAG-powered applications by handling chunking, embedding, vector search, and more. It allows developers to deploy production-grade AI applications in under 30 minutes.

Similarly, Oracle recently launched HeatWave GenAI, which integrates LLMs and vector processing within the database, allowing users to leverage generative AI without requiring AI expertise or data movement.

Meanwhile, Microsoft Azure offers Azure Cosmos DB, a fully managed NoSQL, relational, and vector database that integrates AI capabilities for tasks like RAG. Azure also provides Azure Cognitive Search, which uses AI for advanced search and data analysis.

Data warehousing platform Snowflake recently launched Cortex Agents, a fully managed service for integrating, retrieving, and processing structured and unstructured data at scale.

For now, LLMs cannot replace databases because they lack real-time updates and precise data control. Businesses still need reliable database solutions alongside AI.


Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism, putting forward ideas worth pondering in the era of artificial intelligence.