As generative AI technologies and applications like ChatGPT continue to rise in popularity, AI has become a buzzword that every entrepreneur, corporate executive, and investor wants to be associated with.
As an advocate for the power of large language models (LLMs), I believe that generative AI has tremendous potential despite its imperfections. In this article, I will explain how to take full advantage of generative AI while avoiding its shortcomings.
Disillusionment: Generative AI Leveling the Playing Field for AI Solution Development
Like any other new technology, generative AI creates a "trough of disillusionment," especially in a business context. A common misconception is that any IT person or team can use technologies like GPT to build an AI solution for business gain. While generative AI can accelerate the development of certain AI solutions (e.g., solving natural-language understanding [NLU] problems or writing code fragments), building a robust and sustainable AI solution still requires AI expertise and sophisticated engineering skills. Such a solution must meet all the usual software development requirements, such as user experience design, data management and security, and system performance and scaling.
Generative AI reminds me of the emergence of the microprocessor (CPU), which made the creation of personal computers (PCs) possible. While all computer engineers had access to microprocessors, not everyone became Steve Jobs, nor did every PC maker achieve the same success as Apple.
History has taught us that while new technological inventions can generate great business opportunities, those opportunities always favor people with deep expertise and passion in the related field. Jobs' expertise and obsession with design and user experience allowed him to capitalize on personal computing better than anyone else could.
Generative AI is no exception. Those with deep AI and IT expertise and extensive experience building AI solutions will have the upper hand: they can quickly build optimal AI solutions that maximize the strengths of generative AI while avoiding its weaknesses.
So the challenge is identifying the required AI and IT expertise and skills. Read on for the three technical competencies I believe enable faster and better value capture from generative AI.
Three Technical Ingredients for Successful Adoption of Generative AI
Since the emergence of generative AI, I have experienced and witnessed firsthand the challenges and rewards of deploying it in practical business settings. Based on that real-world experience, I have identified three areas of technical competence critical to successfully implementing generative AI solutions.
The Agent Framework
Any AI solution for business, no matter what form it takes (e.g., a text-based chatbot, an embodied character, or even an interactive web page), is essentially an AI agent expected to perform certain business tasks to aid its human supervisors. Not only should an agent respond to user requests, but it should also act proactively on end users' behalf. An effective agent framework forms the foundation of a generative AI solution, allowing it to communicate and interact with users and other systems meaningfully. This framework should encompass several components:
Perceptors and Actuators. These essential components are the AI system's eyes, ears, mouth, and hands, enabling it to perceive the environment and act upon it accordingly. For instance, perceptors might include natural-language processing capabilities to process user input, while actuators could involve capabilities like text generation to respond to users or communication with other software systems to transfer and manage data. Generative AI excels in this area.
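To make the perceptor/actuator split concrete, here is a minimal sketch in Python. All class and method names are illustrative assumptions, not from any real framework, and the keyword-based "perception" is a stand-in for an actual LLM call.

```python
# Hypothetical sketch: a perceptor turns raw input into a structured
# observation; an actuator turns a decision into an action.

class Perceptor:
    """The agent's 'eyes and ears': structure raw environment input."""
    def perceive(self, raw_input: str) -> dict:
        # A real system might call an LLM for intent detection;
        # here we fake it with simple keyword matching.
        intent = "greeting" if "hello" in raw_input.lower() else "unknown"
        return {"text": raw_input, "intent": intent}

class Actuator:
    """The agent's 'mouth and hands': act on the environment."""
    def act(self, observation: dict) -> str:
        if observation["intent"] == "greeting":
            return "Hello! How can I help you today?"
        return "Could you tell me more about what you need?"

# Wire them together: perception feeds action.
perceptor, actuator = Perceptor(), Actuator()
reply = actuator.act(perceptor.perceive("Hello there"))
```

In a production agent, both classes would wrap generative AI capabilities; the value of the split is that each side can be upgraded or tested independently.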
Control Unit. The main function of this component is to coordinate the perceptors and actuators, sequencing their actions to solve a complex problem. Like the human nervous system, which connects and coordinates the functions of many biological parts, the capabilities of a control unit are critical to agent success. General-purpose open-source implementations of control units exist, such as LangChain, but most use cases require customized control units to manage custom business logic coupled with highly diverse and complex agent-user interaction workflows.
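A toy example of a customized control unit, under the assumption that it routes each observation through an ordered pipeline of handlers (business rules first, model call last). The handler names and routing rule are illustrative, not taken from LangChain or any other framework.

```python
# Hypothetical custom control unit: run handlers in sequence;
# the first non-None result becomes the agent's response.

def check_policy(obs: dict):
    # Business-logic step: intercept topics that must bypass the model.
    if "refund" in obs["text"].lower():
        return "Refund requests are handled by our billing team."
    return None  # defer to the next handler

def call_llm(obs: dict):
    # Stand-in for an actual generative model call.
    return f"(model answer to: {obs['text']})"

def control_unit(obs: dict, handlers) -> str:
    """Coordinate the steps needed to resolve one user turn."""
    for handler in handlers:
        result = handler(obs)
        if result is not None:
            return result
    return "Sorry, I cannot help with that."

response = control_unit({"text": "I want a refund"}, [check_policy, call_llm])
```

The key design choice is that custom business logic sits in ordinary, testable functions around the model call, rather than being buried inside prompts.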
Working Memory. Like humans, capable AI agents need a working memory to maintain context during human-AI interactions. This component should be able to store and retrieve short-term information, allowing the agent to respond accurately and consistently to user inputs in context. To process and manage the vast amounts of high-dimensional data generated by AI, working memory is commonly built on a vector database. Such a database should be optimized for handling embeddings and other complex data structures, ensuring the AI system can efficiently access the information it needs.
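The access pattern behind such a working memory (embed, store, nearest-neighbor lookup) can be sketched in a few lines. This toy version uses bag-of-words vectors and cosine similarity purely for illustration; a production system would use learned embeddings and a real vector database.

```python
import math
from collections import Counter

# Toy working memory: store past facts as vectors, recall the most
# similar one for the current query. Illustrative only.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # stand-in for a real embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class WorkingMemory:
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def store(self, text: str):
        self.entries.append((text, embed(text)))

    def recall(self, query: str) -> str:
        q = embed(query)
        return max(self.entries, key=lambda e: cosine(q, e[1]))[0]

memory = WorkingMemory()
memory.store("the user prefers email contact")
memory.store("the order ships on Friday")
best = memory.recall("when does my order ship")
```

Swapping `embed` for a model-generated embedding and `entries` for a vector-database index changes the scale, not the shape, of this component.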
Customizable Plugins. In an enterprise context, AI assistants often do not work alone. Instead, they must work with many other business systems including databases with corporate knowledge and policies, CRM systems that store and manage customer information, and financial systems that handle transactions. To maximize agent capabilities and optimize the AI return on investment (ROI), an AI agent framework must allow customizable plugins to easily integrate with third-party business systems.
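One way to support customizable plugins is a uniform interface plus a registry, so each third-party system sits behind the same call signature. The sketch below assumes this design; the CRM lookup is a hypothetical stand-in, not a real integration.

```python
# Hypothetical plugin interface: each plugin wraps one external business
# system; the registry lets the agent call any of them by name.

class Plugin:
    name: str
    def run(self, request: dict) -> dict:
        raise NotImplementedError

class CRMLookup(Plugin):
    name = "crm_lookup"
    def __init__(self, records: dict):
        self.records = records  # stand-in for a real CRM connection
    def run(self, request: dict) -> dict:
        return {"customer": self.records.get(request["customer_id"], "unknown")}

class PluginRegistry:
    def __init__(self):
        self._plugins = {}
    def register(self, plugin: Plugin):
        self._plugins[plugin.name] = plugin
    def call(self, name: str, request: dict) -> dict:
        return self._plugins[name].run(request)

registry = PluginRegistry()
registry.register(CRMLookup({"c42": "Ada Lovelace"}))
result = registry.call("crm_lookup", {"customer_id": "c42"})
```

Because every plugin shares one interface, adding a financial-system or knowledge-base integration means writing one new class, not modifying the agent's core.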
A No-Code Platform for Human-AI Collaboration

AI is far from perfect, and human intelligence is still the most effective way to train it. For example, the success of ChatGPT is largely due to reinforcement learning from human feedback (RLHF) on top of the large amount of training data OpenAI has acquired, highlighting the importance of continuously incorporating human intelligence into AI development. To continuously improve and protect an AI investment, a no-code platform that enables tight collaboration between AI and humans is a must-have and should include:
Human User Interface (UI). A user-friendly, no-code interface enables business users to supervise an AI agent and align the AI with the intended business goals with no programming expertise or additional IT support. Not only does this empower non-IT subject matter experts (SMEs) to inject their domain expertise into AI, but it also enables broader adoption of AI within the organization.
Test, Evaluation, and Ongoing Maintenance. Like any software solution, AI requires comprehensive testing, evaluation, and regular maintenance to ensure that the system continues to meet the business's quality and safety standards. Regular evaluations help identify areas for improvement and drive the ongoing development of the AI agent. Tools like dashboard visualizations and a maintenance UI should be provided to facilitate these processes. Since SMEs, rather than IT personnel, are most likely to maintain an AI agent (e.g., upgrading its knowledge and communication style), these tools must be no-code as well.
Live Human Integration. In situations where the AI agent is unable to provide a satisfactory response or solution, seamless integration with live human agents is essential. This ensures that businesses can consistently deliver the desired outcomes and maintain high customer satisfaction.
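The routing decision behind live human integration is often a simple confidence gate. The sketch below assumes the agent reports a confidence score per answer; the threshold value and score are illustrative.

```python
# Hypothetical escalation rule: answers below a confidence threshold
# are routed to a live human queue instead of being sent by the AI.

HANDOFF_THRESHOLD = 0.6  # illustrative cutoff

def route(answer: str, confidence: float) -> tuple:
    """Return (channel, message) for one agent turn."""
    if confidence < HANDOFF_THRESHOLD:
        return ("human", "Transferring you to a team member who can help.")
    return ("ai", answer)

channel, message = route("Your order ships Friday.", confidence=0.9)
fallback_channel, _ = route("I am not sure.", confidence=0.3)
```

Real systems add queueing, context handover, and agent availability, but the core contract is this single routing decision.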
Collaborative Environment for AI Co-Creation and Co-Management. Setting up and adopting an AI solution within an enterprise is a coordinated, collaborative effort, involving members with different expertise and from different functional areas (e.g. marketing and sales). Therefore, it is important to enable a no-code platform to support collaborative development and management of an AI agent.
Robust AI Infrastructure

A robust AI infrastructure is the backbone for building, deploying, and operating successful AI solutions. While there are several factors to consider when designing one, I highlight three critical ones:
Private Model Building. An enterprise AI solution often requires models fine-tuned on proprietary data. However, organizations may not be able to upload their data to a public cloud due to regulations and/or competitive concerns. A complete AI solution must support private model building with the required security and compliance assurances.
Security and Compliance. Organizations that intend to run their own AI shop must prioritize safeguarding sensitive data, adhering to industry-specific regulations, and upholding ethical AI practices. Those that intend to partner with a vendor should carefully evaluate the vendor's capabilities and commitments to AI safety and security.
DevOps and Scaling. A DevOps approach to AI solution deployment facilitates streamlined system maintenance, updates, and scaling. By closely integrating development and operations teams, an AI infrastructure can rapidly adapt to changing demands and remain agile and responsive. Organizations that want to build AI solutions themselves should consider hiring DevOps talent along with AI engineers; organizations that wish to partner with a vendor should ask how the vendor helps them maintain, update, and scale their AI solution.
Because generative AI can help improve workforce productivity and boost the bottom line, organizations must carefully evaluate whether to build their own generative AI solutions or partner with a vendor. Either way, organizations should develop the three critical technical competencies themselves or require them of a vendor: an effective agent framework; a user-centered, no-code platform for human-AI collaboration; and robust AI infrastructure to support AI performance, security, and scalability.
As businesses navigate the rapidly evolving world of generative AI, it is essential to understand the associated challenges and opportunities and to work with the right technical talent to maximize ROI.
Dr. Huahai Yang is a co-founder and CTO of Juji, an AI company specializing in teaching machines with advanced human soft skills, such as people reading. Dr. Yang is an inventor of IBM Watson Personality Insights and a computer scientist and psychologist by training, who has extensive experience in building human-centered, enterprise AI solutions for real-world success.