Samsung employees accidentally leaked confidential information while using the AI chatbot ChatGPT for work. Engineers in Samsung’s semiconductor division had been using the tool to help debug source code, but in the process they entered sensitive data, such as source code for a new program and notes from internal meetings, into the service.
This led to three separate incidents of employees leaking sensitive information within a single month. Because OpenAI may retain user input to train its models, that information is now in the hands of the company behind the service.
In one incident, an employee used ChatGPT to optimize confidential test sequences for identifying faults in chips; in another, an employee fed it confidential meeting notes to convert into a presentation. In response, Samsung Semiconductor is now developing its own in-house AI for employees to use, with prompts limited to 1024 bytes.
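The article only states that prompts to Samsung's in-house AI are capped at 1024 bytes; it does not describe how that cap works. As a minimal sketch, a guard like the following could enforce such a limit. The function name and behavior are assumptions for illustration, not Samsung's actual implementation; note that a byte limit differs from a character limit once multi-byte characters are involved.

```python
# Hypothetical sketch of a 1024-byte prompt cap, as described in the article.
# The name `check_prompt` and the rejection behavior are assumptions.
MAX_PROMPT_BYTES = 1024

def check_prompt(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the UTF-8 encoding of `prompt` fits within `limit` bytes."""
    return len(prompt.encode("utf-8")) <= limit

# A byte limit is not a character limit: non-ASCII characters count for more.
print(check_prompt("a" * 1024))  # 1024 ASCII chars = 1024 bytes -> True
print(check_prompt("a" * 1025))  # 1025 bytes -> False
print(check_prompt("é" * 600))   # 600 chars but 1200 UTF-8 bytes -> False
```

Capping prompt size this way limits how much confidential material a single request can carry, which fits the article's framing of the cap as a leak-mitigation measure.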
Samsung Electronics has warned its workers about the risks of leaking confidential information, since data submitted to ChatGPT cannot be retrieved once it is stored on OpenAI’s servers. In the fiercely competitive semiconductor industry, any such leak could be disastrous for a company.
Samsung cannot request the retrieval or deletion of the information from OpenAI, which some argue puts ChatGPT at odds with the EU’s GDPR: the right to have one’s data erased is one of the core principles of the law governing how companies collect and use data. Italy has even temporarily banned the use of ChatGPT nationwide over such concerns.