In the End, the Machines Will Win

It is easy to promote the idea of an "Internet of Things" with artificial intelligence as a great business opportunity. But we need to look seriously at the problems that can arise. A significant part of the Internet of Things will be funded by the military, and the objective of that military IoT will be to kill humans.

Yes, there are serious industrial accidents, such as when an Amazon warehouse robot set off bear repellent and put workers in hospital. That is bad, of course, but we believe we can improve the programming and control to eliminate such errors.

But what if the robot is deliberately designed to kill, such as a drone with an assault rifle or machine pistol attached? Well, that's fine as long as they remain under human control, right?

But increasingly this network, this "Internet of Things", will have its own expert systems, its own artificial intelligence, incorporated into the "thing", or, in this case, the weapon. In South Korea, for example, they are building AI expert systems into robots. Killer AI robots.

Once we start building AI systems with neural networks, they begin to develop, without deliberate design, their own ideologies. Recall when Microsoft released a chatbot that, within a day of exposure to the public, turned into a Nazi.

Our combination of an Internet of Things, expert systems, and the ever-increasing power and miniaturisation of firepower has a very tragic trajectory, one that will be outside our control. In this situation, in the end, only the machines will win.

I see an intrinsic danger in developing networked artificial expert systems with weaponry. I do agree with the advocates that such weapons are more precise, cause less collateral damage, and so forth. However, by their very autonomy they are outside our control; because they are networked, they present a vector that can be exploited and accessed by third parties (even OpenBSD, arguably the world's most secure operating system, has had a few remote security exploits); and, given technological trajectories, they will only become more powerful.

If any of these statements are untrue, I would welcome correction. I really would!

The underlying problem is that human moral reasoning is amplified by our technological capability, for good or ill. If you like: moral reasoning determines the mathematical sign, our own action is a number indicating its force, and technology is a multiplier (enhancing) or divisor (restricting) of that action. Over time it becomes possible for a person to engage in actions well and truly above their personal ability.
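The sign-and-multiplier metaphor above can be put as a toy calculation. A minimal sketch, where the function name, arguments, and numbers are purely illustrative and not from the original text:

```python
def net_effect(moral_sign: int, action_force: float, tech_factor: float) -> float:
    """Toy model of the essay's metaphor: moral reasoning supplies the sign
    (+1 beneficial, -1 harmful), the action supplies a magnitude, and
    technology scales the outcome (factor > 1 amplifies, < 1 restricts)."""
    assert moral_sign in (-1, 1)
    return moral_sign * action_force * tech_factor

# The same act of will, amplified by increasingly capable technology:
by_hand = net_effect(-1, 1.0, 1.0)       # harm bounded by personal ability
by_weapon = net_effect(-1, 1.0, 100.0)   # the same intent, multiplied a hundredfold
```

The point the model makes is that the technology term grows over time while the human term does not, so the magnitude of a single person's action, for good or ill, keeps rising.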

The prospect of dangers arising from technology has some important protections. One is institutional limitation, where the use of a technology is bounded by access restrictions and oversight (cf. the operations required for the "nuclear briefcase"). Despite the wishes of some extreme political libertarians, we can't just go down to the local gunshop and buy a nuke (although there was a rather satirical science-fiction novel about this in the eighties, Dad's Nuke), and the use of advanced destructive technology is usually subject to the decision of several people rather than one. Herbert Simon's notion of "bounded rationality" provides a good theoretical foundation for these sorts of limits.

The other, more controversially, is communication and understanding, which can be technologically mediated. Following contemporary linguistic-pragmatic philosophy (Habermas, Karl-Otto Apel), I am of the opinion that communicative understanding is a foundation for empathy and solidarity, both within and between species. The "mutually assured destruction" doctrine of the past was overcome by both mass protest and political leadership that saw people from different nations, with different political and economic systems, as deserving life just as we deserve life. As a result, it seems we dodged the "bullet" of nuclear annihilation, and maybe we'll do the same for global warming as well.

But what would an autonomous weaponised expert system care about such things?
