Despite Widespread GenAI Adoption, Confidence in Security Measures Remains Low at 5%

As the use of Generative AI (GenAI) grows, a new report reveals a major issue - GenAI security is critically inadequate. According to Lakera's report, while 90% of organizations are actively employing or researching GenAI, just 5% of cybersecurity specialists are confident in the security safeguards that secure these applications.

The report has uncovered a critical gap in the rush to adopt GenAI technology: GenAI system security. As GenAI adoption grows, so does the possibility of quick attacks—specific approaches that enable even inexperienced users to manipulate AI apps, acquire unauthorized access, steal confidential data, and engage in unintended activities.

"With just a few well-crafted words, even a novice can manipulate AI systems, leading to unintended actions and data breaches. As businesses increasingly rely on GenAI to accelerate innovation and manage sensitive tasks, they unknowingly expose themselves to new vulnerabilities that traditional cybersecurity measures don't address. The combination of high adoption and low preparedness may not be that surprising in an emerging area, but the stakes have never been higher," said David Haber, co-founder and CEO at Lakera.

Concerns about the dependability and correctness of LLM (Large Language Model) are the most significant barrier to GenAI adoption, with 35% citing it as a major issue. Data privacy and security issues are close behind at 34%, with 28% citing a lack of competent workers as a significant challenge.

In addition, as many as 45% of those polled are considering GenAI use cases, while 42% are already actively adopting and implementing GenAI. Only 9% have no present plans to use LLMs. Only 22% of respondents used AI-specific threat modeling to mitigate GenAI-related risks.

Industry-wide disparities 

Furthermore, the study found considerable differences in AI security procedures across industries. While 58% of firms lack an AI security role, only 12% have specialist teams. This disparity is especially evident in industries such as education, where only 9% of firms have dedicated AI security teams. In contrast, the finance sector is better equipped, with 20% of businesses using specific AI security teams. This disparity highlights the various degrees of urgency and preparation among industries.