Samsung completely bans employees from using ChatGPT at work due to data leaks

Samsung previously suffered a leak of sensitive data after an employee entered it into ChatGPT, prompting a directive that barred staff from sharing sensitive information with the AI.

Samsung has now expanded the scope of that ban: employees may still use ChatGPT on personal devices and outside of work hours, but they are strictly forbidden from entering any company information, particularly anything related to intellectual property.

The initial leak involved portions of product source code, and it remains unclear what prompted this broader restriction. Samsung requires employees to follow its security guidelines and warns that violations resulting in leaks of company data could lead to disciplinary action, up to and including termination.

Additionally, Samsung is developing its own artificial intelligence software, including a chatbot designed to help employees summarize reports, write code, and translate text. That software, however, is still in development and not yet in use.

Samsung’s move underscores the data-security risks that come with AI tools: it is neither the first company to see data leak through ChatGPT, nor likely to be the last. More businesses may follow suit and impose similar restrictions on their employees’ use of ChatGPT.