Saturday, December 21, 2024

Pentagon Can Now Weaponize ChatGPT After Terms of Service Altered

'The U.S. government is able to experiment with a broader range of social controls...'

(Ken Silva, Headline USA) The artificial intelligence firm OpenAI has stealthily removed prohibitions on using its technology for military purposes—a move that may pave the way for the Pentagon to weaponize programs such as ChatGPT, the popular tool that mimics human conversation.

The Intercept reported on Friday that, two days earlier, OpenAI had deleted a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.”

An OpenAI spokesperson told The Intercept that the AI company wants to pursue certain “national security use cases that align with our mission.” The spokesperson reportedly cited a plan to create “cybersecurity tools” with DARPA, adding that “the goal with our policy update is to provide clarity and the ability to have these discussions.”

The Intercept noted that none of OpenAI’s current public technologies could be used directly to kill someone. However, programs such as ChatGPT can be useful for intelligence analysis, logistics and numerous other purposes.

“I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University, told The Intercept.

OpenAI tech could also conceivably be used to help make convincing “deepfakes”: videos, audio or images in which one person’s likeness is replaced with another’s.

And indeed, a Pentagon procurement document that surfaced last March revealed that the Defense Department is looking to obtain numerous new technologies, including tools for creating deepfakes.

The ostensible purpose of the deepfakes is to enable U.S. Special Operations Command to conduct “influence operations, digital deception, communication disruption, and disinformation campaigns at the tactical edge and operational levels,” according to the procurement document.

While the SOCOM document purports to limit such psyops to foreign populations, history shows that military technology used overseas often comes back home to be deployed on domestic populations, from drone technology first experimented with in Vietnam to facial recognition software developed in Afghanistan. Economists Chris Coyne and Abigail Hall, authors of Tyranny Comes Home, termed this the “boomerang effect.”

“The U.S. government is able to experiment with a broader range of social controls,” Coyne and Hall wrote. “Under certain conditions, these policies, tactics, and technologies are then re-imported to America, changing the national landscape and increasing the extent to which we live in a police state.”

Ken Silva is a staff writer at Headline USA. Follow him at twitter.com/jd_cashless.
