Google's recent announcement that it is abandoning its previous pledge not to develop AI for military use, including weapons and surveillance technologies, has sparked significant debate. Commenters express concern over the ethical implications of such technology being used in warfare and domestic policing, particularly since Large Language Models (LLMs) could in principle conduct extensive data analysis on individuals without human oversight. Many acknowledge that rising global tensions, especially among major powers such as the U.S., Russia, and China, have increased demand for military applications of AI. Some comments attribute the shift in corporate priorities to political pressure, while others highlight the potential dangers of AI-driven drones and autonomous weapon systems. A major concern is the lack of ethical guidelines and the urgent need for the tech community to devise measures for monitoring and regulating such weapons, a debate that echoes the shift in public discourse around nuclear weapons after World War II. Overall, Google's abandonment of its pledge raises critical ethical, moral, and safety questions about AI's role in global conflict and surveillance.