AFP: Pentagon adopts 'ethical principles' for artificial intelligence use
25 February 2020 | 01:54 | FOCUS News Agency
"AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior," Secretary of Defense Mark Esper said in a statement.
The Pentagon, which regularly criticizes the use of facial recognition technology by police in China, has pledged to establish "explicit, well-defined uses" for AI technology, according to the statement.
Such technology, which learns through experience the skills needed to complete its assigned tasks, will also be "reliable" and operate within transparent systems of use, the Pentagon statement said.
As a result, AI technology will be "governable": the military will have "the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."
The question of AI weapons has been a controversial one in the Pentagon, where the basic principle was that human beings had to stay in the loop -- a formula that implies the machine itself cannot make the decision about whether to shoot at a target.
Such principles, which remained vague in the absence of AI-equipped weapons, were defined after 15 months of consultations with representatives of US technology giants, major universities and the administration. The discussions were led by Eric Schmidt, the former executive chairman of Google.
Under pressure from its employees, Google in 2018 declined to renew a Pentagon contract called Project Maven, which used machine learning to distinguish people and objects in drone videos.