Apple has restricted employee use of ChatGPT and similar external artificial intelligence tools as it reportedly develops its own language-generation technology. The company fears that confidential data could leak if employees used such tools in their work. With this move, Apple joins the likes of JPMorgan Chase and Verizon, which have already imposed similar bans.
According to the report by The Wall Street Journal, Apple has also advised its employees to avoid Microsoft-owned GitHub's Copilot, which automates the writing of software code, for similar confidentiality reasons.
Apple has good reason to be worried about ChatGPT. About a month ago, Samsung suffered a data leak after some of its employees used ChatGPT to check source code for errors and to summarize meeting notes. In response, Samsung restricted the use of generative AI systems on company-owned devices and internal networks.
Even OpenAI itself confirmed that a bug in an open-source library used by ChatGPT exposed sensitive payment-related information, including names, email addresses and payment addresses. The data belonged to ChatGPT Plus subscribers and was potentially visible to other users during a window on March 20. OpenAI has since fixed the bug and says it has taken steps to safeguard users' data and privacy. In April, OpenAI launched an option that allows users to disable their chat history; conversations with history disabled are not used to improve the AI model.
While Apple has banned its employees from using ChatGPT, the official ChatGPT app became available for download by US users on the iOS App Store just a few days ago.