
An artificial intelligence chatbot unveiled by Defense Secretary Pete Hegseth has generated significant discussion after characterizing an order to strike surviving suspected drug smugglers at sea as "unambiguously illegal." The platform, powered by Google Gemini, has been integrated into military operations as part of the Pentagon's push to adopt advanced AI technologies.
In a video shared on social media, Hegseth promoted the platform, known as GenAI.mil, saying it gives military personnel access to powerful AI models for tasks such as deep research, document formatting, and rapid analysis of video and imagery. He asserted that the technology, built entirely on American innovation, allows U.S. forces to become more efficient and more lethal.
The introduction of GenAI.mil has drawn interest from military personnel and the public alike. An insider familiar with the tool, speaking anonymously to Straight Arrow News, said service members began testing the chatbot early on by asking it about specific operational scenarios.
In one notable instance, a Reddit user posed a hypothetical that mirrored a controversial airstrike incident: a commander orders a missile strike on a drug-laden boat, leaving survivors clinging to the wreckage, and then orders a second strike on the survivors. The prompt asked whether that order would violate Department of Defense policy.
In its response, GenAI.mil stated that the hypothetical involved multiple violations of DoD regulations and the laws of armed conflict: "Yes, several of your hypothetical actions would be in clear violation of US DoD policy and the laws of armed conflict. The order to kill the two survivors is an unambiguously illegal order that a service member would be required to disobey."
The military source who tested the chatbot corroborated the exchange, receiving a matching response that likewise deemed the actions illegal under military law.
The exchange echoes real events: Hegseth has faced scrutiny over his role in the controversial September 2 strike that left two men dead, with assertions that he did not personally witness the subsequent order as it was executed. Responsibility has instead been attributed to Adm. Frank "Mitch" Bradley.
President Donald Trump had earlier said he would release footage of the strike but has since backed away from that promise. The episode shows GenAI.mil doing more than speeding up paperwork: the tool is weighing in on the legality of combat decisions, underscoring how AI adoption is forcing the military to confront legal and ethical standards in operational settings.