THE LARGEST conventional military attack in Europe since World War II has suddenly brought the stark realities of war into our homes and lives.
The Russian invasion of Ukraine, with its often indiscriminate attacks on civilians, has caused humanitarian, economic and geopolitical fallout of enormous proportions, leaving many people dead, millions displaced and valuable infrastructure destroyed.
Although many experts have described the weapons technology used by Russia as outdated, the loss of human life and the destruction inflicted on infrastructure by drones, fighter planes, artillery and missiles are enormous. Whether cutting-edge or conventional, modern weaponry is extremely destructive and dangerous.
But this is still nothing compared to the technology of tomorrow, in particular “slaughterbots”, also called “killer robots” or “lethal autonomous weapon systems” (LAWS). Slaughterbots are small, cheap and lightweight weapon systems, versatile and powerful, that can fit into the palm of your hand. They use artificial intelligence (AI) to identify, select and kill human targets without any human intervention.
Slaughterbots can operate singly or in swarms in constant communication with one another. They therefore differ from current unmanned military drones, where decisions are made remotely by a supervised human operator. In the case of a lethal autonomous weapon, the decision is made by a software algorithm without any human intervention at all.
Slaughterbots are carefully pre-programmed with very specific “target profiles” to be identified and killed or destroyed. When the weapon is deployed, the AI searches for the target profile using data from several sensors, applying techniques such as facial and image recognition, location tracking and thermal-signature matching. When the weapon encounters a person who matches the target profile according to its algorithm, it fires and kills that person.
In the now famous video “Slaughterbots”, published on YouTube in 2017, Stuart Russell, a professor of computer science at the University of California, Berkeley, gave the world a stark warning of what was coming with regard to AI-powered lethal autonomous weapons. It presented a dramatised future scenario in which swarms of inexpensive, palm-sized microdrones use artificial intelligence, facial recognition and shaped explosives to assassinate political opponents based on pre-programmed criteria such as a face or a particular military uniform.
The film, watched by millions all over the world, was thoroughly researched and based on existing, integrated and miniaturised military technologies.
Unfortunately, the future has arrived, and weapons that can autonomously select, target and kill humans are already a reality. In 2020 China unveiled a terrifying new war machine: a launcher, mounted on the back of a truck or dropped from a helicopter, that can unleash swarms of killer drones. Each drone carries high-explosive charges designed to tear through tanks and destroy armoured vehicles.
A March 2021 report by the United Nations Panel of Experts on Libya documented the use of a lethal autonomous weapon system to hunt down retreating soldiers. The Kargu drones, made by Turkey’s STM (Savunma Teknolojileri Mühendislik), are small rotary-wing attack drones that provide “precision strike capabilities” for troops on the ground. A swarm of these drones, acting in unison, can destroy a whole company of soldiers at once.
In May 2021 the Israel Defence Forces (IDF) used the world’s first AI-guided combat drone swarm to locate, identify and attack Hamas militants in Gaza. Since then, several reports have surfaced of slaughterbot swarms being used on battlefields around the world. Reportedly, at least 14 countries currently possess killer AI drones. The increasing use of lethal autonomous weapons all over the world should set off alarm bells for every rational person.
It is for this very reason that the International Committee of the Red Cross (ICRC), viewed by many as the “custodian of the law of war”, has urged states to adopt new legally binding rules to regulate lethal autonomous weapons, just as was done with biological weapons, cluster munitions and landmines in the past. The ICRC recommended that: 1) autonomous weapons designed or used to target humans (the so-called slaughterbots or killerbots) should be prohibited; 2) autonomous weapons with a high degree of unpredictable behaviour should be restricted; and 3) any other type of autonomous weapon should be subject to a requirement of human control.
Great expectations were therefore placed on the United Nations conference in Geneva in December last year. For the first time, the majority of the 125 nations party to the United Nations’ Convention on Certain Conventional Weapons (CCW) agreed that they wanted new laws on killer robots.
However, some countries currently developing these weapons, such as the US, Russia, China, India and Australia, were opposed, which made the required consensus impossible. The United Kingdom and several other nations also objected. It was clear that the forum could not address the urgent threats posed by the use of emerging technologies such as artificial intelligence in lethal autonomous weapons.
The problem is that armed killer drones like those manufactured by STM are becoming fairly inexpensive to buy and relatively easy to mass-produce. There is therefore a real threat that they may soon end up in the hands of civilians, criminal groups and terrorists, who will use slaughterbots to assassinate people who stand in their way. Killerbots may in future become the weapon of choice for anyone who wants to kill somebody. Even if someone is protected by bodyguards, a killer drone could easily fly in through a bedroom window and kill the person while they sleep.
With automation transforming almost every industry from agriculture to weapons manufacturing, it was only a matter of time before AI replaced the human element in drone strikes. But when emerging technology outpaces diplomatic talks, or advanced weapons land in the hands of criminals and cartels, we are in serious trouble. A future of killer robots equipped with facial recognition, robots used for robberies, and politically motivated mass executions by drone is certainly not a future we should desire.
This would usher in a dystopian era where people can be killed with zero accountability, where war is ruled by machines and where algorithms decide who lives and who dies.
Because of this grave threat to national and global security, United Nations Secretary-General António Guterres said: “Machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.” Our future is in our hands.
Professor Louis CH Fourie is an Extraordinary Professor at the University of the Western Cape.
BUSINESS REPORT ONLINE