According to the health ministry in the Hamas-run Gaza Strip, the war has killed more than 30,000 people, the majority of them civilians. Israel says AI helps it target militants more accurately in its war against Hamas, which began in October. But as civilian deaths mount, experts are questioning how accurate these algorithms can really be. Toby Walsh, an AI professor at the University of New South Wales, frames the dilemma bluntly: “Either the AI is as good as claimed, and the Israeli military doesn’t care about collateral damage, or the AI is not as good as claimed.”
Israel says it has killed some 10,000 militants since the conflict began and has highlighted the role of AI algorithms in selecting targets. Activists were already concerned about the use of AI-powered hardware such as drones and gunsights in Gaza. The Israeli military first deployed AI-assisted targeting during the May 2021 conflict, which it described as the “first AI war.” It claims these systems help identify new targets daily, and that they identified more than 12,000 targets in just 27 days, a rate of roughly 440 a day.
There are doubts, however, about the accuracy and ethics of these systems. Critics, including an anonymous former Israeli intelligence officer, have described the AI system, known as the Gospel, as enabling a “mass assassination factory.” The military has not disclosed what data the system draws on or what criteria it uses to select targets. Experts suggest it is likely fed drone footage, social media posts, ground-level intelligence, mobile phone locations, and other surveillance data. Once a target is identified, the system could use population data to estimate the potential harm to civilians nearby.
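To make that last step concrete, here is a minimal sketch of the kind of calculation experts describe: given a candidate strike location and gridded population estimates, count the people within an assumed effect radius. Everything here is hypothetical, since the grid layout, radius, and density figures are illustrative assumptions, and nothing about the Gospel’s actual implementation has been disclosed.

```python
# Purely illustrative sketch of a collateral-damage estimate from
# gridded population data. This does NOT reflect the Gospel's actual
# (undisclosed) implementation; the grid, radius, and density figures
# below are all hypothetical assumptions.
from dataclasses import dataclass
import math

@dataclass
class GridCell:
    x_m: float        # cell centre, metres east of a local origin
    y_m: float        # cell centre, metres north of a local origin
    population: int   # estimated number of people currently in the cell

def estimated_civilian_exposure(target_x_m: float, target_y_m: float,
                                effect_radius_m: float,
                                grid: list[GridCell]) -> int:
    """Sum the population of every cell whose centre lies within the radius."""
    total = 0
    for cell in grid:
        if math.hypot(cell.x_m - target_x_m, cell.y_m - target_y_m) <= effect_radius_m:
            total += cell.population
    return total

# Hypothetical 100 m grid around a candidate target at the origin,
# assuming ~40 people per cell.
grid = [GridCell(x * 100.0, y * 100.0, 40)
        for x in range(-5, 6) for y in range(-5, 6)]

print(estimated_civilian_exposure(0.0, 0.0, effect_radius_m=250.0, grid=grid))
```

The point of the sketch is how sensitive the output is to its inputs: the estimate is only as good as the population grid and the assumed radius, which bears on the criticisms that follow.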
Lucy Suchman, a professor at Lancaster University, cautions against assuming that more data automatically leads to better targeting. Algorithms can be flawed, she explains, and feeding them more data of dubious quality can make their outputs worse, not better.
While AI-assisted targeting is not a new concept, some military analysts remain skeptical of claims about its advanced capabilities. Analyst Noah Sylvia emphasizes that AI outputs still need to be cross-checked by humans, even in a military as technologically advanced as Israel’s.
Ultimately, the use of AI in military operations raises hard questions about its effectiveness, its ethics, and its cost in civilian lives. As the conflict continues, so will the debate over AI’s role in warfare and its consequences.