Old 03-12-23, 06:14   #15
Ladybbird

Re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI - LONG Read

‘Machines Set Loose to Slaughter’: The Dangerous Rise of Military AI

Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots?

BBC 3 DEC 2023

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave.

In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force.

The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.


Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”.

One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.

Who knows how many other noxious creatures are now models for avant-garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presents a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines.

When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing virtual battle-tested simulations to inspire future military investments.

In the past, nation states have come together to prohibit particularly gruesome or terrifying new weapons. By the mid-20th century, international conventions banned biological and chemical weapons. The community of nations has forbidden the use of blinding-laser technology, too. A robust network of NGOs has successfully urged the UN to convene member states to agree to a similar ban on killer robots and other weapons that can act on their own, without direct human control, to destroy a target (also known as lethal autonomous weapon systems, or Laws).

And while there has been debate about the definition of such technology, we can all imagine some particularly terrifying kinds of weapons that all states should agree never to make or deploy. A drone that gradually heated enemy soldiers to death would violate international conventions against torture; sonic weapons designed to wreck an enemy’s hearing or balance should merit similar treatment. A country that designed and used such weapons should be exiled from the international community.

In the abstract, we can probably agree that ostracism – and more severe punishment – is also merited for the designers and users of killer robots. The very idea of a machine set loose to slaughter is chilling. And yet some of the world’s largest militaries seem to be creeping toward developing such weapons, by pursuing a logic of deterrence: they fear being crushed by rivals’ AI if they can’t unleash an equally potent force.

The key to solving such an intractable arms race may lie less in global treaties than in a cautionary rethinking of what martial AI may be used for. As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.

Are killer robots as horrific as biological weapons? Not necessarily, argue some establishment military theorists and computer scientists. According to Michael Schmitt of the US Naval War College, military robots could police the skies to ensure that a slaughter like Saddam Hussein’s killing of Kurds and Marsh Arabs could not happen again.

Ronald Arkin of the Georgia Institute of Technology believes that autonomous weapon systems may “reduce man’s inhumanity to man through technology”, since a robot will not be subject to all-too-human fits of anger, sadism or cruelty. He has proposed taking humans out of the loop of decisions about targeting, while coding ethical constraints into robots. Arkin has also developed target classification to protect sites such as hospitals and schools.

In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence done in the heat of passion, but for the premeditated murderer who coolly planned his attack.

The history of warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system is likely to be designed with some kind of override feature, which would be controlled by human operators, subject to all the normal human passions and irrationality.

Any attempt to code law and ethics into killer robots raises enormous practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to programme a robot warrior with reactions to the infinite array of situations that could arise in the heat of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors, an autonomous weapon system in the fog of war is dangerous.

Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where there is a massive dataset with clearly understood examples of good and bad, right and wrong.

For example, credit card companies have improved fraud detection mechanisms with constant analyses of hundreds of millions of transactions, where false negatives and false positives are easily labelled with nearly 100% accuracy.
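
To make the labelled-data point concrete, here is a minimal, purely illustrative sketch (not from the article, and not any real bank's system) of the kind of supervised learning the fraud analogy rests on: synthetic "transactions" with two made-up features and an unambiguous fraud label, fed to a simple scikit-learn classifier. Every feature, number and threshold in it is an assumption chosen only for illustration.

[CODE]
# Illustrative only: synthetic data standing in for the "hundreds of millions
# of transactions" the article describes; features and label rule are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 50_000

# Two toy features per transaction: amount spent and distance from home.
amount = rng.exponential(scale=50.0, size=n)
distance_km = rng.exponential(scale=20.0, size=n)

# Ground-truth labels are cheap and unambiguous here -- exactly the condition
# the article says battlefield decisions lack.
fraud_prob = 1.0 / (1.0 + np.exp(-(0.02 * amount + 0.03 * distance_km - 5.0)))
labels = rng.binomial(1, fraud_prob)

X = np.column_stack([amount, distance_km])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0
)

# With clean labels, a simple model learns the decision boundary, and its
# false positives and false negatives can be counted precisely on held-out data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
[/CODE]

The whole pipeline depends on cheap, repeatable, cleanly labelled examples; the article's argument is that split-second battlefield judgments supply nothing of the kind.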

Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)?

Given these difficulties, it is hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to support dangerous fantasies of push-button wars and guiltless slaughters.


Video: Slaughterbots

Video: New Robot Makes Soldiers Obsolete


