AI FOMO is real: IT leaders are racing to adopt artificial intelligence (AI), worried they will miss the tech train if they don't act fast. According to a new survey conducted by ABBYY, IT leaders worldwide are motivated to invest in AI largely by a fear of falling behind, with 63% expressing anxiety that their business would fall behind if it does not adopt AI. "Fear of missing out" (FOMO) is clearly a powerful motivator in today's tech ecosystem.
“It’s no surprise to me that organizations have more trust in small language models due to the tendency of LLMs to hallucinate and provide inaccurate and possibly harmful outcomes. We’re seeing more business leaders moving to SLMs to better address their specific business needs, enabling more trustworthy results,” said Maxime Vermeir, Senior Director of AI Strategy at ABBYY.
Given these fears, it is not surprising that IT executives reported their organizations spent an average of $879,000 on AI over the previous year. That spending comes despite 33% of corporate leaders being concerned about the cost of implementing AI, and nearly all respondents (96%) plan to increase their AI investment in the coming year.
The pressure from customers
Customer pressure is another important driver of AI adoption, with 55% of corporate leaders reporting that it influences their decisions about how they use AI. Surprisingly, though, many IT directors are more concerned about matters closer to home: internal misuse of AI (35%), implementation costs (33%), AI mistakes or hallucinations (32%), and even compliance issues (29%).
Despite these worries, corporate leaders place a high degree of trust in AI products, with 84% expressing confidence in their usefulness. Small language models (SLMs), or purpose-built AI solutions, are viewed as the most trustworthy by 90% of decision-makers, and over half (54%) have already used specific AI solutions such as intelligent document processing (IDP). When asked about trust and ethical practices in AI usage, most respondents (91%) felt confident that their organization complies with government requirements.
However, only 56% have internal AI trust rules of their own, while 43% seek help from consultants or non-profits. Half of respondents (50%) said they would feel more confident if their organization had a formal responsible AI policy, and 48% said that software to monitor AI compliance would increase their trust.