Engineer who quit over military drone project warns AI could accidentally start a war
- A former Google engineer, Laura Nolan, who resigned over the secretive Project Maven autonomous drone programme, has warned that international laws are needed to ban killer robots from the battlefield before they cause "mass atrocities".
- Nolan said that fully autonomous robotic weapons, without proper testing, could result in mass atrocities.
- Project Maven was Google's contract with the US Department of Defense to enhance its drone technology using AI. Google dropped the project after mass employee protests.
- Nolan said there must be a treaty banning autonomous AI weapons from the battlefield, as there is for chemical weapons.
A new generation of autonomous weapons, or "killer robots", could accidentally start a war or cause mass atrocities, a former senior Google software engineer, Laura Nolan, has warned.
Nolan, who resigned from Google in protest at being assigned to a project that would dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.
Nolan said killer robots not guided by human remote control should be outlawed by the same kind of international treaty that bans chemical weapons.
Unlike drones, which are controlled by military teams often thousands of miles from where the flying weapon is deployed, Nolan said, AI-driven robots have the potential to do "calamitous things that they were not originally programmed for".
There is no suggestion that Google is involved in the development of autonomous weapons systems. A UN panel of government experts that debated autonomous weapons found Google to be eschewing AI for use in weapons systems and to be engaging in best practice.
Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed”.
“There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
Google recruited Nolan, a computer science graduate from Trinity College Dublin, to work on Project Maven in 2017, after she had been employed by the tech giant for four years and had become one of its top software engineers in Ireland.
She said she became "increasingly ethically concerned" over her role in the Maven project, which was set up to help the US Department of Defense drastically accelerate drone video recognition technology.
Instead of using large numbers of military operatives to spool through hours and hours of drone video footage of potential enemy targets, Nolan and others were asked to build a system in which AI machines could differentiate between people and objects at a vastly faster rate.
Google let the Project Maven contract lapse in March 2019 after more than 3,000 of its employees signed a petition protesting against the company's involvement.
Nolan worked on the project as a site reliability engineer. Although her work was not directly involved in video footage recognition, she realised that she was still indirectly helping the US military improve its AI systems, which would ultimately lead to more people being targeted and killed in places like Afghanistan. Improved video recognition, she feared, could also encourage the military to take the next step of enabling autonomous AI weapons, increasing the chances of unlawful killings even under the laws of war. Nolan worried that if the military deployed such weapons, external factors such as unexpected radar signals or unusual weather could disrupt the systems, with the result that hundreds or thousands of innocent people could be killed.
Although she left Project Maven, Nolan has warned that the autonomous AI weapons now being developed pose a far greater risk to the human race than remote-controlled drones.
She described how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with possibly deadly consequences.
"You could have a scenario where autonomous robots that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into their software, or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn't have the discernment or common sense that the human touch has."
"The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that's happening with the Russians at present in Syria, who knows? What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons.
"If you are testing a machine that is making its own decisions about the world around it, then it has to be in real time. Besides, how do you train a system that runs only on software to detect subtle human behaviour, or to understand the difference between hunters and insurgents? How does the killing machine out there on its own, flying about, distinguish between the 18-year-old militant and the 18-year-old who is hunting for birds?"
The ability to convert military drones, for instance, into autonomous non-human-guided weapons "is just a software problem these days, and one that can be relatively easily solved", said Nolan.
She said she wanted the Irish government to take a stronger line in supporting a ban on such weapons.
"I am not saying that missile-guided systems or anti-missile defence systems should be banned. These are, after all, under full human control, and someone is ultimately accountable. These autonomous AI weapons, however, represent an ethical as well as a technological step-change in warfare. Very few people are talking about this, but if we are not careful, one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities."
Nolan, in short, is urging all countries to ban work on autonomous killer robots outright, on the grounds that it is unethical. With so few people taking the issue seriously, she argues, failure to stop these AI weapons risks killer robots accidentally starting a war or destroying a power station and causing mass killing. Better safe than sorry.
What autonomous killer robot weapons technology exists?
Some of the autonomous weapons being developed by military powers around the world include:
- The US navy's AN-2 Anaconda gunboat, which is being developed as a "completely autonomous watercraft equipped with artificial intelligence (AI) capabilities" and can "loiter in an area for long periods of time without human intervention".
- Russia's T-14 Armata tank, which is being developed to be completely unmanned and autonomous. It is designed to respond to incoming fire independently of any tank crew inside.
- The US Pentagon has hailed the Sea Hunter autonomous warship as a major advance in robotic warfare. An unarmed 40 metre-long prototype has been launched that can cruise the ocean’s surface without any crew for two to three months at a time.