{"id":13636,"date":"2026-02-01T00:49:50","date_gmt":"2026-02-01T01:49:50","guid":{"rendered":"https:\/\/globaltalenthq.com\/?p=13636"},"modified":"2026-02-05T06:00:54","modified_gmt":"2026-02-05T06:00:54","slug":"pentagon-wants-killer-ai-without-safeguards-reuters","status":"publish","type":"post","link":"https:\/\/globaltalenthq.com\/index.php\/2026\/02\/01\/pentagon-wants-killer-ai-without-safeguards-reuters\/","title":{"rendered":"Pentagon wants killer AI without safeguards \u2013 Reuters"},"content":{"rendered":"
The US Department of War has reportedly clashed with contractor Anthropic over the ethical limitations built into its technology.

The US Department of War is locked in a dispute with artificial intelligence developer Anthropic over restrictions that would limit how the military can deploy AI systems, including for autonomous weapons targeting and domestic surveillance.

The disagreement has stalled a contract worth up to $200 million, with military officials pushing back against what they see as excessive limits imposed by the San Francisco-based company on the use of its technology, Reuters reported, citing six people familiar with the matter.

Anthropic has raised concerns that its AI tools could be used to carry out lethal operations without sufficient human oversight, or to surveil Americans, sources told Reuters.

Pentagon officials, however, have argued that commercial AI systems should be deployable for military purposes regardless of a company's internal usage policies, as long as their use complies with US law.

The dispute comes amid a broader push by the Trump administration to rapidly integrate artificial intelligence across the armed forces. Earlier this month, the Department of War outlined a new strategy aimed at transforming the US military into an "AI-first" fighting force.

The Pentagon believes it must retain full control over how AI tools are employed on the battlefield and in intelligence operations, with US Defense Secretary Pete Hegseth vowing not to use models that "won't allow you to fight wars."

An Anthropic spokesperson said the company's AI is "extensively used for national security missions by the US government" and that the company remains in "productive discussions with the Department of War about ways to continue that work." The Pentagon has yet to comment on the reported rift.