49% of Participants Fooled by Counterfeit ChatGPT Apps, New Study Reveals

According to a new survey by Beyond Identity, an alarming 49% of respondents were tricked by counterfeit ChatGPT apps.

The phishing-resistant MFA company has published the results of its recent research into the tactics hackers use to infiltrate systems and steal sensitive data, including how they exploit generative AI to automate complex attacks.

The survey sheds light on the alarming effectiveness of ChatGPT-based scams and highlights ways individuals and businesses can safeguard themselves against fraudulent messages, unsafe apps, and password breaches.

Participants were presented with various schemes and asked whether they would be susceptible and, if not, to pinpoint the reasons for their skepticism. Notably, 39% admitted they could fall prey to at least one phishing message, 49% failed to spot a counterfeit ChatGPT app, and 13% confessed to using AI to generate their passwords.

“With adversaries using AI, the level of difficulty for attackers will be markedly reduced. While writing well-crafted phishing emails is a first step, we fully expect hackers to use AI across all phases of the cybersecurity kill chain. Organizations building apps for their customers or protecting the internal systems used by their workforce and partners will need to take proactive, concrete measures to protect data—such as implementing passwordless, phish-resistant multi-factor authentication (MFA), modern Endpoint Detection and Response (EDR) software and zero trust principles,” said Jasson Casey, CTO of Beyond Identity.

As part of the survey, ChatGPT created phishing emails, text messages, and social media posts, and participants were asked to judge which ones seemed credible. Among the 39% who admitted vulnerability to at least one of the options, the most convincing scams arrived as social media posts (21%) and text messages (15%). For those who were wary of every message, the most notable red flags included suspicious links, peculiar requests, and unusually large sums of money being solicited.

While 93% of respondents had never personally had information stolen through an unsafe app, 49% were deceived when asked to pick the genuine ChatGPT app out of a lineup that included six convincing imitations. Intriguingly, individuals who had previously fallen prey to app fraud were notably more likely to be fooled again.

The survey also delved into how hackers can exploit ChatGPT for social engineering. For instance, ChatGPT can use readily available personal information to generate lists of likely passwords for account-breaching attempts. This poses a significant problem for the one in four respondents who incorporate personal details, such as birth dates (35%) or pet names (34%), into their passwords – information easily obtainable from social media, business profiles, and phone directories.
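To illustrate how little an attacker needs, here is a minimal Python sketch of the kind of wordlist-building the survey warns about: it combines a handful of scraped personal facts into a large set of plausible password guesses. The personal details shown are hypothetical, and the combination patterns reflect common password habits rather than any method attributed to the study.

```python
from itertools import product

# Hypothetical scraped facts: the kind of details (pet name, birth date)
# that the survey notes are easy to find on social media and public profiles.
facts = ["rex", "Rex", "REX", "1987", "87", "0412"]

# Common decorations people append to "strengthen" a password.
suffixes = ["", "!", "1", "123", "2024"]

# Pair the facts in both orders and tack on a suffix, mimicking
# patterns like "Rex1987!" or "0412rex123".
candidates = set()
for first, second in product(facts, repeat=2):
    if first.lower() == second.lower():
        continue  # skip degenerate pairs like "rex" + "Rex"
    for suffix in suffixes:
        candidates.add(f"{first}{second}{suffix}")

print(f"{len(candidates)} password guesses from just a pet name and a birth date")
print(sorted(candidates)[:5])  # peek at a few of the generated guesses
```

Even this toy generator produces over a hundred guesses from two personal facts in milliseconds, which is why randomly generated passwords and phishing-resistant MFA blunt this attack far better than "clever" personal variations.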