On March 18, Israeli bombs rained down on Gaza, shattering a shaky peace that had lasted for two months—the longest ceasefire in the war since Hamas attacked Israel on Oct. 7, 2023.
On-the-ground reporting captured snippets of horrors unfolding in the aftermath of the bombing: a father trying to open his lifeless 2-year-old’s eyes, relatives pulling loved ones from the rubble and rushing to hospitals and cemeteries, a mother screaming over the bodies of her 13- and 15-year-old sons. “I did not allow them outside, but today I did. Why did they come out today?” she gasped between sobs.
Over 400 people lost their lives in the bombings, which Israel says targeted Hamas officials and were intended to force Hamas to release the remaining hostages taken in the Oct. 7 attack. If Hamas doesn’t comply, Israel’s defense minister said, the country will open “the gates of hell” in Gaza.
According to Hamas, five midlevel government leaders perished in the bombings.
These are but a fraction of the more than 50,000 people who have died in Gaza since the war began. Most of the dead are women and children.
Thousands of them were marked for death by a computer.
Waging war with artificial intelligence (AI) has long been the stuff of science fiction, inspiring dystopian films like The Terminator and The Matrix. It’s not fiction anymore. In March 2020, an autonomous drone attacked Libyan troops without permission from a human, in what is believed to be the first autonomous attack by a machine in history. Popular Mechanics called it a “turning point,” adding that “science fiction … crossed over into science fact.”
We are now living in the era of AI-guided war.
The war between Israel and Hamas marks the first large-scale use of AI on the battlefield. The Israel Defense Forces (IDF) is using AI and digital tools to select and track bombing targets.
The United States has outsized influence over AI development, including in the realm of defense. Much of the research and development is based here, our military budget is the highest by far, and we’re the most powerful nation in the world.
A question weighing on many minds is whether the U.S. should use that influence to ensure that AI is developed ethically—or if it should focus solely on growing the industry to encourage innovation; boost the economy; and stay ahead of China, our rival in the space. Many also wonder what happens if the world’s richest country allows unfettered development of AI weapons. Will others follow suit? What precisely will be created? How do you keep that tech from falling into the wrong hands?
The Trump administration has sent strong signals that its top priority is encouraging growth in the AI industry, not roping it in with regulation. Its laissez-faire approach to AI development no doubt thrills the so-called “broligarchy” who have the president’s ear. They largely view regulation, such as the European Union’s AI Act that went into effect last summer, as unnecessary and overly restrictive, holding that innovation should come first. But handing the free market fanbros an AI industry with no guardrails gives others pause, including people who generally support the technology and believe it has the potential to accomplish great things for humans and our planet.
“Just the fact that Silicon Valley is here is hugely instrumental, and it almost gives the U.S. the power to break from those sorts of cooperative agreements in a way that if we do the right thing with that, that would be great, and we could cut through red tape, for example,” Jacy Reese Anthis, founder of the Sentience Institute, recently told me.
“But if we handle it the wrong way, then it means we’ve taken that power that we happen to have and used it the wrong way.”
Targeted by AI
After Hamas’ Oct. 7 attack, in which roughly 1,200 people were killed and hundreds taken hostage, Israel retaliated with a vengeance. Over the next six weeks, its bombing campaign claimed the lives of some 15,000 Palestinians.
Israel has bombed Gaza before, but never with such frequency and duration—nor with so many casualties so fast. Even among supporters of its war effort, some quietly wondered how it was choosing targets so quickly.
Last April, an investigation by +972 Magazine provided the answer: AI. Israel, it reported, is using AI and digital tools to identify, track, and target people believed to be Hamas militants. Subsequent reporting has confirmed the magazine’s findings.
The IDF reportedly uses a program called “Lavender,” which rates people by the likelihood that they belong to Palestinian armed groups; “Where’s Daddy?,” which tracks where those people are likely to be found, often their family homes; and “The Gospel,” which identifies buildings and other structures where Hamas and other militant groups are believed to operate.
An algorithm is essentially deciding who to target, when, and where.
Per +972, these programs are known to make mistakes. Lavender, for instance, has a purported 90% accuracy rate, meaning roughly one in 10 of the people it flags may not be militants at all. Lavender isn’t fully automated; a person must approve the targets the machine selects. But sources told the magazine that an analyst might spend as little as 20 seconds checking the machine’s work before approving a bombing that could wipe out an entire family, or even multiple families.
The IDF insists its use of AI is “misunderstood,” and that humans maintain oversight over decisions.
The Associated Press subsequently reported that Israel is using tools created by U.S. tech giants to help choose targets. Immediately after the Oct. 7 attack, as it was waging the deadliest phase of bombing, Israel’s use of Microsoft and OpenAI technology “skyrocketed,” per the AP.
Multiple sources within the Israeli defense establishment reportedly told the Guardian that the country is increasingly dependent on companies like Microsoft, Google, and Amazon to store and analyze intelligence. Microsoft has lucrative contracts with Israel and is OpenAI’s biggest investor.
In February, OpenAI told Fortune it does not partner with the Israeli military. OpenAI’s policies previously prohibited using its products for “military and warfare.” Its policies now prohibit using its products to create weapons or hurt people or property—unless it’s for “national security use cases that align with our mission.”
The company confirmed last year that it had changed the policy to allow for military applications. In December, it announced a partnership with defense contractor Anduril Industries to develop counterdrone systems.
OpenAI did not respond to emailed inquiries sent Sunday. Microsoft has previously declined to comment or answer questions about working with the IDF, per the Guardian.
Much of this took place before Trump assumed office. When he took the oath of office on Jan. 20, whatever chance there was that the U.S. would try to stop American tech companies from building and selling tools designed for slaughter effectively vanished.
Roping in AI
Using AI for war is not without controversy, even within companies creating the technology.
On Friday, Microsoft celebrated its 50th anniversary. During the event, which was livestreamed, two employees separately interrupted the festivities to protest its contract with Israel.
One approached the stage during Microsoft AI CEO Mustafa Suleyman’s speech, shouting, “You claim that you care about using AI for good but Microsoft sells AI weapons to the Israeli military. Fifty thousand people have died and Microsoft powers this genocide in our region.”
In a statement, a Microsoft spokeswoman told the Daily Dot via email, “We provide many avenues for all voices to be heard. Importantly, we ask that this be done in a way that does not cause a business disruption. If that happens, we ask participants to relocate. We are committed to ensuring our business practices uphold the highest standards.”
Insiders aren’t the only ones concerned about how AI is being used and developed. Anthis of the Sentience Institute, a think tank that researches social and technological change, pointed out that large majorities support banning the development of sentient AI, banning AI that’s smarter than humans, and slowing down progress.
“In terms of policy, we see that people are very concerned and generally opposed to advances in AI,” Anthis said.
Policymakers have similar concerns. Last year, the European Union passed the first comprehensive AI regulation. Its AI Act seeks to ensure AI is developed in accordance with certain values. To that end, it prohibits various AI practices, such as social scoring, untargeted scraping of facial images to build facial recognition databases, and real-time remote biometric identification by law enforcement in most circumstances.
The Biden administration also took on AI.
In 2023, then-President Joe Biden issued a sweeping executive order that sought to steer the AI industry toward development consistent with ethics and human rights. The order directed federal agencies to develop guidelines for regulating and tracking AI.
The following year, Biden created the AI Safety Institute Consortium, which brought together stakeholders with the goal of developing and deploying “safe and trustworthy” AI.
Biden’s moves angered people on both sides of the issue, with some saying they went too far and others that they didn’t go far enough.
Still others saw them as welcome first steps.
“There’s no way around just putting in the work for this sort of thing, you know, building out regulatory infrastructure, making definitions, having standards,” Anthis said.
Now the U.S. is taking the opposite course. On his first day in office, Trump repealed Biden’s executive order. Days later, he issued his own: “Removing Barriers to American Leadership in Artificial Intelligence.” The order revoked policies and guidelines described as “barriers” to innovation, signaling that he has no plans to box in the AI industry.
Full speed ahead
Love it or hate it, there’s no denying that the great AI gold rush is well underway. An international economic research platform identified 94 so-called unicorns—startups that achieve valuations of $1 billion or more—in the AI space in 2024 alone.
Billions are being poured into developing AI and building the infrastructure to support it. Microsoft, Google, Meta, and Amazon each plan to spend between $60 billion and $100 billion on AI infrastructure this year alone. The day after Trump’s inauguration, OpenAI CEO Sam Altman and SoftBank CEO Masayoshi Son joined him to announce “Stargate,” a four-year, $500 billion project to build the data centers needed to power AI.
Investments on that scale arguably discourage regulation, especially for a president as famously anti-regulation as Trump.
Trump’s choice of David Sacks to serve as AI and Crypto Czar further convinced people that the administration has little appetite for creating guardrails around the industry. Sacks is a Silicon Valley venture capitalist and early PayPal executive who worked alongside Peter Thiel and Elon Musk. He reportedly has significant investments in AI and crypto companies.
Brad Carson is president and founder of Americans for Responsible Innovation (ARI), a policy group dedicated to AI. He’s an Iraq War veteran and also served as the Under Secretary of Defense for Personnel and Readiness under President Obama. Carson believes it’s “fair” to say Sacks has a “deregulatory sensibility.”
“It does seem like any attempt to regulate AI will meet a presumptive ‘no’ from him,” Carson told me.
Even more proof that the White House will be hands-off came in February, when the U.S. declined to sign an international pledge to promote responsible AI development. The United Kingdom was the only other country to decline. China, the U.S.’s rival in the space, was among the more than 60 nations that did sign the pledge.
The pledge was the culmination of conversations among world and industry leaders, including at two previous summits in South Korea and the U.K. Those discussions had delved extensively into AI safety, so when a draft of the declaration leaked days before the summit, many were surprised to see that it did not mention the subject.
“For many of us, the most significant update there was the disappointing show of global governance and global responsibility coming from leaders at large on pushing safety issues,” said Hamza Chaudhry, AI and national security lead at the Future of Life Institute.
To some, the final version was so diluted that it didn’t really matter if the U.S. signed it. To others, declining to sign signaled that the U.S. isn’t interested in engaging with the international community as humanity grapples with protecting human life and the planet from dangerous AI.
During the summit, Vice President JD Vance warned foreign governments against “tightening the screws” on American tech companies.
Vance’s comments were widely seen as a swipe at the EU’s AI Act. They also signaled that the administration is focused solely on encouraging growth in AI and unconcerned with ensuring it’s developed ethically. During his speech, Vance declared that AI has the potential to usher in a “new industrial revolution, one on par with the invention of the steam engine.”
“The sort of emphasis on innovation above everything else you saw in the Vance speech was deeply troubling,” said Frank Pasquale, a professor at Cornell Law School and author of The Black Box Society: The Secret Algorithms That Control Money and Information.
Too soon to regulate?
People generally agree that some lines should not be crossed, such as machines being able to autonomously launch a nuclear attack. But beyond worst-case scenarios like a computer program deciding to destroy a city or even the entire planet, opinions vary widely.
Some believe that AI regulation is necessary—just not yet. The industry is in its infancy, they say, and there simply isn’t enough information or consensus.
“It’s developing so rapidly we don’t actually know what it will look like in two or five, much less 10 or 15 years. Therefore, any regulation you put in place today could be either ineffective or even counterproductive,” said former Undersecretary of Defense Carson.
Carson sees benefits in using AI for certain defense purposes, such as improving logistics or analyzing intelligence. A computer can sift through millions of pages of data and intel in hundreds of languages at speeds no human could ever match. It can streamline various tasks for maximum efficiency.
Carson doesn’t think that the tech has developed to the point where fully autonomous weapons exist, although he acknowledged that he doesn’t see the latest security briefings. His best guess is that it is currently possible to create an autonomous weapon with 50% accuracy—well below the threshold to comply with international rules of war. So is it really necessary to draft a treaty or legislation to address weapons that might not exist?
Still, even those who don’t think sweeping rules are prudent right now believe the industry should be regulated before it’s too late. Anthis of the Sentience Institute pointed to social media as a cautionary example.
“Regulation in social media was not able to keep up with the pace of the proliferation of the technology, the development of algorithms that keep you scrolling all the time. And we’ve as a society, I think, lost our agency to some extent,” Anthis said.
He added that AI is “the fastest-moving technology in history.”
Reasons to regulate
Countries have an undeniable interest in having the latest weaponry. But if the 20th century’s arms races taught us anything, it’s that once a weapon exists, it tends to spread and, sooner or later, to be used.
In the first month of World War I, French soldiers attacked German troops with grenades containing tear gas. The assault is widely considered the first chemical attack of the war and is credited with launching the modern era of chemical warfare. Chemical weapons would go on to cause more than 1 million casualties in that conflict, including roughly 90,000 deaths.
In the 1925 Geneva Protocol, signatory nations pledged not to use chemical or biological weapons in war. The 1993 Chemical Weapons Convention (CWC) went further, prohibiting the development, production, acquisition, stockpiling, and transfer of chemical weapons and requiring that existing stockpiles be destroyed. Nearly every nation in the world has signed and ratified it.
In 2013, almost exactly a century after French soldiers lobbed those first tear gas grenades, more than 1,000 civilians were killed in a chemical attack on Damascus, Syria.
Once the genie is out of the bottle, it can be impossible to put it back.
The Future of Life Institute works to guide the tech industry away from large-scale risks and toward innovations that benefit, rather than harm, life. The nonprofit advocates for an international agreement ensuring that autonomous systems can’t make the decision to launch nukes. In 2017, it released Slaughterbots, a short film about swarms of autonomous drones. The scenario was futuristic at the time. Eight years on, it’s nearing reality.
The institute isn’t only focused on weapons. It’s also concerned about integrating AI into decision-making, as Israel has reportedly done to select targets in Gaza.
Chaudhry believes it’s time to take steps to make sure AI weapons and systems don’t advance past a point of no return. Even if humans are required to review an AI’s determination that an individual is an enemy combatant, he pointed out, a large body of evidence shows that people are less likely to question a machine than they are another person.
At the current pace of development, Chaudhry sees it as likely that the U.S. and China will have autonomous weapons within the next decade. In a war in which both sides have such weapons, he stressed, it will be machine versus machine.
“It’s AI fighting AI, and that’s happening at machine speeds, which make it very difficult to verify, very difficult to authenticate certain decisions. So we have this general feeling of governments losing meaningful human control over decisions and conflict,” Chaudhry said.
“The best way to progress is to pump the brakes a little bit, engage in some international coordination, set up some basic guardrails, and then compete responsibly,” he added.