Microsoft Banned Employees from Using ChatGPT, Allegedly for Security Reasons

Microsoft reportedly posted the temporary ban notice on its internal website and also blocked corporate devices from accessing the chatbot

by Sededin Dedovic
© Ethan Miller / Getty Images

The AI landscape witnessed a notable episode when ChatGPT, the popular AI chatbot developed by OpenAI, landed at the center of controversy over alleged security risks. The story took an unexpected turn when Microsoft, one of OpenAI's primary investors, reportedly imposed a temporary ban on its employees' use of ChatGPT, citing security concerns.

The ban, according to reports, was communicated through an internal notice on Microsoft's website, which indicated that corporate devices were also blocked from accessing the chatbot. The decision raised eyebrows, especially given Microsoft's substantial financial backing of OpenAI: a commitment of $10 billion earlier in the year, on top of a previous $3 billion investment.

The move appears to be rooted in broader data security apprehensions that have gripped companies globally. Microsoft, despite its significant investment in OpenAI and the integration of OpenAI's language model into its products, emphasized in an internal communication that ChatGPT remains an "external service of an independent company." Employees were urged to exercise caution, a directive extended to other external services, including the AI image generator Midjourney.

This surprising development occurred shortly after OpenAI's inaugural developer conference, where Microsoft representatives actively participated. The ban, however, seems to have been short-lived, with CNBC reporting that Microsoft swiftly restored access to ChatGPT after the news broke.

According to a company spokesperson, the ban was a mistake: endpoint control systems for large language models were inadvertently switched on during testing, which blocked access. Microsoft sought to allay concerns by reaffirming that it encourages employees and customers to use services such as Bing Chat and ChatGPT.

Despite this swift reversal, the incident underscores the delicate balance between embracing AI advancements and the need for rigorous data security measures. Even major investors in AI technologies such as Microsoft find themselves exercising caution, reminding users of the importance of approaching AI services, including ChatGPT, with a measured sense of vigilance regarding data security.
