OpenAI has deleted part of its usage policy that prohibited the use of its AI technology for military and warfare purposes.

An OpenAI spokesperson told Verdict that while the company’s policy still does not allow its tools to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are national security use cases that align with its mission.

“For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on,” the spokesperson said, adding: “It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”

The ChatGPT maker’s usage policy previously included a ban on activity involving “weapons development” and “military and warfare”.

However, the updated policy, which went live on 10 January, no longer includes the ban on “military and warfare” uses.

OpenAI retained the blanket ban on using the service “to harm yourself or others”, which cites “develop or use weapons” as an example.


“We’ve updated our usage policies to be more readable and added service-specific guidance,” OpenAI said in a blog post.

“We cannot predict all beneficial or abusive uses of our technology, so we proactively monitor for new abuse trends,” the blog post added.

Sarah Myers West, managing director of the AI Now Institute, told the Intercept that, with AI being used to target civilians in Gaza, now is a notable moment for OpenAI to change its terms of service.

Fox Walker, analyst at research company GlobalData, told Verdict that the new guidelines “could very well lead to further proliferation of AI use in defence, security, and military contexts.”

“Whether it be the use of non-lethal technology, the development of military strategy, or simply the use of budgeting tools, there are many areas where AI can assist military leaders without causing harm to others or creating new weapons,” Walker said.

In October, OpenAI formed a new team to monitor, predict, and try to protect against “catastrophic risks” posed by AI such as nuclear threats and chemical weapons.

The team, named Preparedness, will also work to counter other dangers such as autonomous replication and adaptation, cybersecurity threats, biological and radiological attacks, and targeted persuasion.

In 2022, OpenAI researchers co-authored a study which flagged the risks of using large language models for warfare.

An OpenAI spokesperson previously said: “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks.”

GlobalData estimates the total AI market will be worth $383.3bn in 2030, implying a 21% compound annual growth rate between 2022 and 2030.