OpenAI Lifts Ban on Military Use of Its AI Models, Raising Widespread Concern

2024-01-15

OpenAI, the well-known US AI company behind the widely used ChatGPT, has drawn attention by lifting its ban on military use of its AI models. According to The Intercept, the previous terms of use explicitly prohibited uses that risked harm to people, citing "weapon manufacturing" and "military and warfare" as examples. After a major update on January 10th, however, those restrictions disappeared, leaving only a vaguer injunction to "not harm others," with weapons cited as one example.

An OpenAI spokesperson explained that the change was meant to make the terms more concise and broadly applicable, given how widely the company's products and services are now used. The spokesperson did not say directly whether "do not harm others" covers all military applications, stating only that the terms prohibit using the company's technology to manufacture weapons, to harm or destroy others, or to carry out unauthorized destructive actions.

Cybersecurity experts describe the adjustment as a significant change, one that quietly relaxes OpenAI's restrictions on military applications. They point out that Microsoft, OpenAI's major partner, is a primary supplier to the US military, and that military demand for AI technology is growing. While OpenAI's technology has not been used directly for killing, they note, it has already been employed in military-related tasks such as coding, procurement, and analysis. In their view, removing the "military and warfare" ban from the terms of use is concerning, especially at a time when AI systems are being used to attack civilians in the Gaza Strip.