Cohesity, a data security and management company, has announced a significant expansion of Cohesity Gaia, its enterprise knowledge discovery assistant. The update introduces what the company describes as one of the industry’s first AI-powered search capabilities for backup data stored on-premises.
The move marks a major step for the enterprise data management ecosystem. By leveraging NVIDIA accelerated computing and enterprise AI software, including NVIDIA NIM microservices and NVIDIA NeMo Retriever, Cohesity Gaia integrates generative AI into backup and archival processes, giving enterprises deeper data insights that can improve efficiency and support innovation and growth.
Pat Lee, vice president of strategic enterprise partnerships at NVIDIA, highlighted the benefits of the collaboration, saying, “Enterprises can now harness AI-driven insights directly within their [on-premises environments] to preserve data accessibility and security while unlocking new levels of intelligence.”
The solution will be compatible with Cisco Unified Computing System (UCS), Hewlett Packard Enterprise (HPE), and Nutanix platforms, and will offer a range of deployment options.
Customers such as JSR Corporation, a Japanese research and manufacturing company, are already evaluating the benefits of the solution.
As enterprises adopt hybrid cloud strategies, many retain critical data on-premises to meet security, compliance, and performance requirements. Extending Gaia to these environments allows organisations to gain high-quality data insights while maintaining control over their infrastructure.
Sanjay Poonen, CEO and president of Cohesity, also emphasised the importance of on-premises AI solutions.
Cohesity Gaia now offers enterprises enhanced speed, accuracy, and efficiency in data search and discovery. Its multi-lingual indexing and querying capabilities allow global organisations to analyse data in multiple languages.
The infrastructure is scalable and customisable to meet business requirements, with a reference architecture designed for seamless deployment across hardware platforms.
Pre-packaged large language models (LLMs) run on-premises, ensuring that backup data remains secure and never requires cloud access. Gaia’s optimised architecture allows efficient searches across petabyte-scale datasets, making retrieval fast and reliable.
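To make the on-premises pattern more concrete, the sketch below shows how a retrieval-style query against locally hosted NIM microservices might look. It is a minimal illustration only: the endpoint URLs, model names, and the pass-through retrieval step are assumptions for demonstration, not Cohesity Gaia’s actual interfaces, and it relies on the OpenAI-compatible API that NIM services expose.

```python
# Minimal, illustrative sketch of an on-premises retrieval-style query.
# The host names, ports, model names, and the pass-through "retrieval" step
# are assumptions for demonstration; they are not Cohesity Gaia's actual API.
from openai import OpenAI

# NIM microservices expose OpenAI-compatible endpoints; these URLs are placeholders
# for services running inside the customer's own environment.
embedder = OpenAI(base_url="http://nemo-retriever.internal:8000/v1", api_key="not-needed")
llm = OpenAI(base_url="http://llm-nim.internal:8000/v1", api_key="not-needed")

def answer_from_backups(question: str, candidate_passages: list[str]) -> str:
    # Embed the question with a NeMo Retriever embedding model (name is illustrative).
    # In a full pipeline this vector would be matched against an index built over
    # backup data; here the candidate passages are simply passed straight through.
    _query_vector = embedder.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",
        input=[question],
        extra_body={"input_type": "query"},  # NVIDIA embedding NIMs expect an input type
    ).data[0].embedding

    context = "\n".join(candidate_passages)

    # Generate an answer grounded in the retrieved context with an on-prem LLM NIM,
    # so neither the question nor the backup data leaves the environment.
    reply = llm.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```

Keeping both the embedding and generation steps behind local endpoints is what allows the entire query path, and the backup data it touches, to stay within the customer’s own infrastructure.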