Persistent Systems Introduces GenAI Hub for Seamless Enterprise AI Integration

Persistent Systems has unveiled the GenAI Hub—a cutting-edge platform designed to fast-track the development and deployment of Generative AI (GenAI) applications within enterprises. This platform integrates smoothly with existing organizational infrastructure, applications, and data, facilitating the swift creation of customized, industry-specific GenAI solutions. GenAI Hub supports various Large Language Models (LLMs) and cloud services, ensuring flexibility without vendor lock-in.

"At Persistent, we have always stayed ahead of the curve to capitalize on the latest industry technology trends, and now we're reaching new frontiers in GenAI to solve critical enterprise challenges and turbocharge client growth. With the Persistent GenAI Hub, clients can embrace a 'GenAI-First' strategy, delivering AI-powered applications and services at scale. They can accelerate innovation while practicing responsible AI, leveraging pre-built accelerators and evaluation frameworks, and optimizing costs with a cross-LLM strategy. The GenAI Hub enables enterprises to streamline operations, enhance customer experiences, and identify new avenues for growth," said Praveen Bhadada, Global Business Head, Persistent.

To harness the full potential of GenAI and convert ideas into real business results, enterprises need to integrate GenAI seamlessly into their current systems. Given the broad spectrum of AI models—from general-purpose to highly specialized—clients need a robust platform like GenAI Hub. This platform facilitates the development and management of multiple GenAI models, accelerating time-to-market with pre-built software components while adhering to responsible AI principles.

The GenAI Hub consists of five main components:

Playground: A no-code tool that enables domain experts to experiment with and apply GenAI using LLMs on enterprise data without needing programming skills. It provides a unified interface to LLMs from providers like Azure OpenAI, AWS Bedrock, and Google Gemini, as well as open models from Hugging Face like LLaMA2 and Mistral.
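The value of a single interface across providers can be sketched in a few lines. This is an illustrative pattern only, not the GenAI Hub's actual API: the `UnifiedLLMClient` class, provider names, and stub backends below are assumptions standing in for real SDK calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    provider: str
    text: str

class UnifiedLLMClient:
    """Hypothetical unified interface: route one prompt format to any provider."""
    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        self._providers[name] = complete

    def complete(self, provider: str, prompt: str) -> Completion:
        if provider not in self._providers:
            raise KeyError(f"unknown provider: {provider}")
        return Completion(provider, self._providers[provider](prompt))

# Stub backends standing in for real SDK calls (e.g., Azure OpenAI, Bedrock).
client = UnifiedLLMClient()
client.register("azure-openai", lambda p: f"[azure] echo: {p}")
client.register("bedrock", lambda p: f"[bedrock] echo: {p}")

result = client.complete("azure-openai", "Summarize Q3 revenue drivers")
print(result.provider, result.text)
```

Swapping providers then becomes a one-argument change rather than a rewrite against a different SDK.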

Agents Framework: This component offers a flexible architecture for developing GenAI applications, leveraging libraries such as LangChain and LlamaIndex to create innovative solutions like Retrieval Augmented Generation (RAG).
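The RAG pattern mentioned above can be shown with a deliberately minimal sketch: retrieve the most relevant documents, then splice them into the prompt. In practice, libraries such as LangChain or LlamaIndex supply real retrievers and LLM wrappers; the keyword-overlap scorer here is a toy stand-in.

```python
def score(query: str, doc: str) -> int:
    """Crude relevance score: count shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the question with retrieved context before sending to an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The refund policy allows returns within 30 days.",
    "Shipping is free on orders over $50.",
    "Support hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("What is the refund policy?", corpus)
print(prompt)
```

The resulting prompt grounds the model's answer in enterprise data rather than its training corpus, which is the core idea behind RAG.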

Evaluation Framework: Leveraging an 'AI to validate AI' methodology, this framework can auto-generate ground-truth questions for human verification. It includes metrics to track application performance and detect drift and bias, allowing for timely corrections.
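The drift check described above boils down to comparing quality metrics over time. As a rough illustration (the scores, tolerance, and function names are assumptions, not GenAI Hub defaults), one can flag drift when the mean of a recent scoring window moves too far from a baseline:

```python
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def drifted(baseline: list[float], recent: list[float], tol: float = 0.1) -> bool:
    """Flag drift when the mean score shifts beyond the tolerance."""
    return abs(mean(baseline) - mean(recent)) > tol

baseline_scores = [0.91, 0.88, 0.93, 0.90]  # e.g., answer-correctness scores
recent_scores = [0.72, 0.70, 0.75, 0.71]    # scores after a model update

print(drifted(baseline_scores, recent_scores))
```

Production frameworks use richer statistics and bias metrics, but the monitoring loop is the same: score continuously, compare against a reference, alert on deviation.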

Gateway: Acts as a router across different LLMs, ensuring compatibility of applications, and improving management of service priorities and load balancing. It also provides detailed insights into token usage and related costs.
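A gateway's two jobs, load balancing and cost accounting, can be sketched together. Everything here is illustrative: the model names, per-token prices, and least-loaded routing rule are assumptions, not the GenAI Hub's implementation.

```python
class Gateway:
    """Hypothetical LLM gateway: least-loaded routing plus token accounting."""

    def __init__(self, cost_per_1k: dict[str, float]) -> None:
        self.cost_per_1k = cost_per_1k
        self.inflight = {m: 0 for m in cost_per_1k}     # requests in progress
        self.tokens_used = {m: 0 for m in cost_per_1k}  # lifetime token counts

    def route(self, tokens: int) -> str:
        """Send the request to the backend with the fewest in-flight calls."""
        model = min(self.inflight, key=lambda m: self.inflight[m])
        self.inflight[model] += 1
        self.tokens_used[model] += tokens
        return model

    def finish(self, model: str) -> None:
        self.inflight[model] -= 1

    def cost(self, model: str) -> float:
        return self.tokens_used[model] / 1000 * self.cost_per_1k[model]

gw = Gateway({"model-a": 0.50, "model-b": 2.00})
first = gw.route(tokens=1500)   # both idle; first registered model wins the tie
second = gw.route(tokens=500)   # routed to the now less-loaded backend
print(first, second, gw.cost(first))
```

Centralizing this logic means applications stay compatible with any backend while operators get a single place to see token spend per model.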

Custom Model Pipelines: These pipelines support the creation and integration of customized LLMs and Small Language Models (SLMs) into the GenAI ecosystem, facilitating a streamlined process for data preparation and model fine-tuning for both cloud and on-premises deployments.
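The data-preparation step of such a pipeline often means normalizing raw records into the instruction-style JSONL that fine-tuning jobs expect. The field names and message format below are a common convention, shown as an assumption rather than the GenAI Hub's actual schema:

```python
import json

def to_training_record(question: str, answer: str) -> str:
    """Normalize one raw Q&A pair into a chat-style JSONL training line."""
    record = {
        "messages": [
            {"role": "user", "content": question.strip()},
            {"role": "assistant", "content": answer.strip()},
        ]
    }
    return json.dumps(record)

raw = [("  What is our SLA?  ", "99.9% uptime."), ("Refund window?", "30 days.")]
jsonl = "\n".join(to_training_record(q, a) for q, a in raw)
print(jsonl.splitlines()[0])
```

The same preparation code can feed either a cloud fine-tuning service or an on-premises training run, which is the portability the component description emphasizes.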

The GenAI Hub streamlines the development of enterprise use cases, providing step-by-step guidance and seamless integration of data with LLMs. This enables the rapid creation of efficient and secure GenAI solutions at scale, benefiting end users, customers, and employees alike.