A new, chilling warning from the United Nations has thrust self-driving vehicles into the spotlight as potential tools for terrorism. A recent U.N. report highlights the risk that autonomous cars could be hijacked remotely by terrorists to execute mass-casualty attacks, functioning as “slaughterbots” without the need for a human suicide bomber.
This alarming prospect raises critical questions about the likelihood of such vehicles being weaponized, the legal and technical challenges of preventing these attacks, and the implications for national security and public safety.
The concept of self-driving cars as remote-controlled bombs is not science fiction but a plausible evolution of existing terrorist tactics. Autonomous vehicles, equipped with advanced artificial intelligence (AI) and connectivity, are increasingly common on roads worldwide. Companies such as Tesla and Waymo have deployed fleets of semi-autonomous and fully autonomous vehicles, with millions of miles logged in testing and commercial use. However, their reliance on software, sensors, and internet connectivity makes them vulnerable to cyberattacks.
The U.N. report warns that terrorists could exploit these vulnerabilities to seize control of vehicles, load them with explosives, and direct them toward crowded targets like pedestrian zones, markets, or critical infrastructure.
The precedent for vehicle-based attacks is well-established. The 2016 Nice truck attack, which killed 86 people when a driver plowed through a Bastille Day crowd, demonstrated the devastating potential of vehicles as weapons. Unlike traditional suicide bombings, which require a willing perpetrator, remotely hijacked autonomous cars eliminate the need for human sacrifice, lowering the psychological and logistical barriers for terrorist groups. The Islamic State’s use of explosive-laden vehicles in Iraq and Syria, often operated with minimal human intervention, foreshadows how AI-driven cars could be repurposed. A single hacked vehicle, packed with explosives and navigated to a high-density target, could rival the destructive impact of a conventional bomb while evading traditional counterterrorism measures.
From a technical perspective, the feasibility of such attacks hinges on the security of autonomous vehicle systems. Self-driving cars rely on a complex ecosystem of software, including AI algorithms, GPS, LiDAR, radar, and vehicle-to-everything (V2X) communication. Cybersecurity experts have long warned of vulnerabilities in these systems. In 2015, security researchers Charlie Miller and Chris Valasek remotely took control of a Jeep Cherokee, manipulating its brakes and steering and exposing the risks of connected vehicles. More recently, researchers demonstrated exploits in Tesla’s Autopilot system, tricking it into misinterpreting road signs or obstacles. The U.N. report underscores that a sophisticated actor—whether a state-sponsored group or a tech-savvy terrorist cell—could infiltrate these systems to override safety protocols and weaponize a vehicle.
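To make the attack surface concrete, the sketch below (in Python, and purely hypothetical) shows the kind of cryptographic check a vehicle's telematics unit would need to apply before acting on a remote command; where authentication and replay protection of this sort are weak or absent, the remote-override scenario the U.N. report describes becomes far more plausible. The message format, key handling, and function names are illustrative assumptions, not drawn from any real vehicle platform.

```python
import hmac
import hashlib
import json
import time

# Hypothetical illustration: one reason remote hijacking is feasible is that
# command channels are not always cryptographically authenticated. This sketch
# shows a check a telematics unit could apply before acting on a remote
# command. The key, message format, and names are illustrative only.

SHARED_KEY = b"replace-with-per-vehicle-provisioned-key"
MAX_AGE_SECONDS = 5  # reject stale or replayed commands


def verify_remote_command(payload: bytes, signature_hex: str) -> dict | None:
    """Return the decoded command only if its HMAC and timestamp check out."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return None  # signature mismatch: drop the command and log the attempt
    command = json.loads(payload)
    if time.time() - command.get("timestamp", 0) > MAX_AGE_SECONDS:
        return None  # too old: possible replay attack
    return command


# Example: a forged "disable safety stop" message with a bogus signature is rejected.
forged = json.dumps({"action": "disable_safety_stop", "timestamp": time.time()}).encode()
assert verify_remote_command(forged, "00" * 32) is None
```

A real deployment would add per-vehicle key provisioning, hardware-backed key storage, and audit logging, but even this minimal check blocks the simplest forged-command attack.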
The likelihood of such attacks depends on several factors. First, terrorist groups must possess the technical expertise to execute a cyberattack on a self-driving car. While groups like al-Qaeda and the Islamic State have historically relied on low-tech methods, their growing interest in cyber capabilities is evident. The U.N. notes that jihadist forums have discussed hacking AI systems, and state actors like Iran or North Korea could provide technical support to proxies. Second, access to explosives or chemical agents is critical to maximizing impact. The global proliferation of improvised explosive devices (IEDs) suggests this is not a significant barrier. Finally, the target environment matters. Urban areas with dense populations and lax vehicle security—such as open parking lots or ride-sharing hubs—are prime candidates for attacks.
Legally, the weaponization of self-driving cars poses a nightmare scenario for liability and prevention. Current laws struggle to assign responsibility for autonomous vehicle accidents, let alone deliberate attacks. In the U.S., the National Highway Traffic Safety Administration (NHTSA) regulates self-driving cars, but no federal framework explicitly addresses their use as weapons. State laws, like those in California and Arizona, permit autonomous vehicle testing but lack provisions for terrorism-related misuse. If a hacked car causes a mass-casualty event, liability could fall on the manufacturer, software developer, or even the vehicle owner, depending on the breach’s nature. For instance, a defect in the car’s cybersecurity could implicate the manufacturer under product liability laws, while an owner’s failure to update software might shift blame to them.
The 2018 death of Elaine Herzberg, the first pedestrian killed by an autonomous Uber vehicle in Tempe, Arizona, illustrates the legal complexities. The National Transportation Safety Board (NTSB) criticized Uber’s inadequate safety protocols, and the company settled with Herzberg’s family, but no criminal charges were filed against the company. A terrorist attack using a hacked car would amplify these issues, with victims’ families potentially suing multiple parties—manufacturers, operators, or even government regulators—for negligence. However, proving causation in court would be daunting, as attackers could obscure their tracks through encrypted networks or proxy servers. The U.N. report emphasizes that existing legal frameworks are ill-equipped to deter or punish such acts, urging nations to develop international standards for AI vehicle security.
This threat underscores the need for robust national security measures balanced against innovation. Self-driving cars promise economic and safety benefits—NHTSA has estimated that driver error is a critical factor in roughly 94% of serious crashes, a toll autonomy could reduce—but their vulnerabilities demand proactive defense. The Trump administration’s focus on deregulation has spurred autonomous vehicle development, but it must pair this with stringent cybersecurity mandates. Public-private partnerships, like those between the Department of Homeland Security and automakers, could drive the adoption of encryption, intrusion detection systems, and mandatory software updates. Internationally, cooperation through NATO or Interpol is essential to counter state-sponsored actors who might weaponize AI vehicles against Western targets.
Preventing these attacks requires a multi-layered approach. Automakers must prioritize cybersecurity, embedding “defense-in-depth” principles like those used in military systems. Real-time monitoring for unauthorized access, coupled with kill switches to disable compromised vehicles, could mitigate risks. Law enforcement must enhance intelligence-sharing to detect terrorist plots early, while urban planners should consider physical barriers in high-risk areas, as seen in European cities post-Nice. Public awareness campaigns, akin to “see something, say something,” could encourage reporting of suspicious vehicle activity, such as unattended autonomous cars in sensitive locations.
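The real-time monitoring and kill-switch idea can be illustrated with a short, hypothetical Python sketch: a supervisory process compares live telemetry against a fixed safety envelope and triggers a minimal-risk stop the moment the envelope is violated. The thresholds, geofence coordinates, and safe_stop hook below are placeholder assumptions rather than features of any deployed system.

```python
from dataclasses import dataclass

# Hypothetical sketch of "monitoring plus kill switch": a supervisor checks
# telemetry against a safety envelope and forces a minimal-risk stop when the
# envelope is violated. Limits and the geofence are illustrative placeholders.


@dataclass
class SafetyEnvelope:
    max_speed_mps: float = 20.0  # hard speed cap for this deployment
    geofence: tuple = (37.33, 37.45, -122.10, -121.95)  # lat_min, lat_max, lon_min, lon_max


def violates_envelope(speed_mps: float, lat: float, lon: float, env: SafetyEnvelope) -> bool:
    lat_min, lat_max, lon_min, lon_max = env.geofence
    outside = not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max)
    return speed_mps > env.max_speed_mps or outside


def supervise(telemetry_stream, env: SafetyEnvelope, safe_stop):
    """Kill-switch loop: trigger a safe stop as soon as telemetry leaves the envelope."""
    for reading in telemetry_stream:
        if violates_envelope(reading["speed_mps"], reading["lat"], reading["lon"], env):
            safe_stop(reason="envelope violation", reading=reading)
            break


# Example: a vehicle commanded outside its geofence triggers the stop.
readings = [
    {"speed_mps": 12.0, "lat": 37.40, "lon": -122.00},
    {"speed_mps": 18.0, "lat": 37.90, "lon": -122.00},
]
supervise(readings, SafetyEnvelope(), lambda **kw: print("SAFE STOP:", kw["reason"]))
```

In practice the envelope would be set per deployment and per trip, and the stop logic would hand control to a fallback system rather than a print statement; the point is that a compromised planner could not silently drive the vehicle outside its authorized operating area.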
The threat of self-driving vehicles being used as remote-controlled bombs is not immediate, but it grows with the proliferation of autonomous technology. The U.N. report cites no specific incidents as of June 2025, but the convergence of terrorist intent, cyber capabilities, and vulnerable AI systems makes it a credible threat. Unlike suicide bombings, which rely on human resolve, this tactic offers attackers anonymity and scalability, amplifying its appeal. While no such attempt has yet been recorded, the 2016 Nice attack and ongoing jihadist interest in vehicle-based terrorism suggest it is a matter of when, not if, one will be made.
For policymakers, the challenge is clear: safeguard innovation without stifling it. For the public, the specter of AI cars as weapons erodes trust in a technology meant to save lives. As self-driving vehicles become ubiquitous, the race is on to secure them against those who would turn tools of progress into instruments of terror. The stakes—measured in lives, infrastructure, and societal resilience—could not be higher.