Exclusive Insight: OpenAI Clarifies Military Policy in Statement to Financial World


© Getty Images/Justin Sullivan

Recently, OpenAI, a leader in AI technology, quietly revised its usage policies, prompting discussion and concern about AI's role in military applications. Following initial reactions, the company has provided further clarification, leading us to update our analysis of these significant policy changes.

Overview of the Original Policy Change

Until January 10, OpenAI's usage policy explicitly forbade the use of its technology for "military and warfare" purposes. This stance was seen as a clear commitment against the militarization of AI.

However, as reported by The Intercept, this language was removed in a recent policy update, which OpenAI described as an effort "to be clearer and provide more service-specific guidance." While the policy still prohibits using OpenAI's large language models (LLMs) for harmful activities, including weapons development, the removal of the explicit reference to "military and warfare" raised concerns.

Concerns Raised by Experts

Experts such as Sarah Myers West of the AI Now Institute and Heidy Khlaaf of Trail of Bits voiced concerns. West noted the timing of the policy change, particularly in the context of AI systems being used for civilian targeting, as seen in Gaza.

Khlaaf, citing a 2022 paper she co-authored with OpenAI researchers, emphasized the risks of deploying AI in military applications, noting that bias, hallucination, and inaccuracy in LLMs could exacerbate harm in military operations.

OpenAI's Clarification

In an exclusive statement to Financial World, an OpenAI spokesperson clarified, "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.

"However, there are national security use cases that align with our mission. For example, we are already working with DARPA to create new cybersecurity tools to secure open-source software crucial for infrastructure and industry.

"Our policy update aims to provide clarity and the ability to discuss these beneficial use cases, which might have been unclear under the previous 'military' policy."

This clarification from OpenAI suggests a nuanced approach to military applications, permitting beneficial national security work while maintaining prohibitions against harmful uses.

This stance addresses some initial concerns about the broad application of AI in military contexts, indicating a more targeted and responsible approach to national security collaborations.