Recently, OpenAI, a leading figure in AI technology, has subtly revised its usage policies, sparking a wave of discussion and concern. This alteration, primarily concerning the application of its technology in military contexts, has raised questions about the company's direction and its implications for the future of AI in warfare.
The Change: A Closer Look
Until recently, OpenAI's usage policy explicitly forbade the utilization of its technology for "military and warfare" purposes. This clause was a clear demarcation, seemingly aligning the company's ethics with a stance against the militarization of AI.
However, as highlighted by The Intercept, a recent update to the company's policy page, dated January 10, has seen the removal of this specific language. The change, described in the changelog as an effort "to be clearer and provide more service-specific guidance," still prohibits the use of OpenAI's large language models (LLMs) for activities that could cause harm, including the development or usage of weapons.
Yet, the specific reference to "military and warfare" has been conspicuously omitted. Sarah Myers West, a managing director of the AI Now Institute, expressed her concerns, particularly in the context of AI's use in targeting civilians, as seen in Gaza.
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” she said. This amendment could potentially open doors for OpenAI to engage with government agencies such as the Department of Defense, which are known for their lucrative contracts.
“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement,” she added.
While OpenAI currently offers no product that can directly inflict physical harm, the adaptability of its technology — useful for tasks like writing code and processing procurement orders — could indirectly contribute to military operations.
OpenAI's Response and the Broader Implications
When questioned about this policy shift, OpenAI spokesperson Niko Felix said that the company's goal was to formulate universal principles that are "both easy to remember and apply." Felix emphasized the broad yet tangible principle of 'Don’t harm others,' noting its relevance across various contexts.
The company has specifically mentioned weapons and causing injury as clear examples of prohibited uses. However, Felix reportedly declined to clarify whether the term "harm" encompasses all military applications beyond weapon development.
“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” Felix said in an email to The Intercept. This ambiguity in the policy's wording leaves a significant gap in understanding the full scope of OpenAI's stance on military use.
“A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”
While the policy maintains a general prohibition against harmful activities, the removal of explicit references to military and warfare could be read as a subtle shift toward a more flexible, perhaps commercially driven, approach.
This development is particularly pertinent as global military agencies are increasingly interested in integrating AI into their operations.
“OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications,” said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use.
The implications of this change are far-reaching. It is not just about what OpenAI's technology can do now, but what it might be capable of in the future. The technology's potential applications in military contexts, even in indirect roles, can have profound ethical and humanitarian implications.
“There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law,” she said.
“Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”
The debate over the militarization of AI is complex, encompassing not just the technology itself, but also broader questions of governance, regulation, and the ethical use of artificial intelligence.