Sunday, May 17, 2015

Artificial Intelligence - How Scared Should We Be?

Intelligence has been ascribed to machines since Turing's device demonstrated the ability to crack the Nazi Enigma codes in WWII. His early computer was a narrowly focused AI, or artificial intelligence, and required programming by the best mathematicians in the UK. Hardly the bogeyman hiding under our childhood beds.

Well, as Intel co-founder Gordon Moore observed in 1965, raw computing power has doubled about every two years, continuing to the present. Even with this vast increase in number-crunching power, computers have not expanded to "artificial general intelligence" (AGI), meaning a broad command of the skills and knowledge which humans possess. But be assured that vast intellectual and financial resources are dedicated to enhancing AI, for all manner of applications, from financial modeling to weaponization. Actors such as Apple, Baidu, Google, the PLA and DARPA are all-in to create the smartest machine for their own competitive advantage, be it Wall Street billions or global hegemony. Unless they are all truly "mad" scientists, they must believe they can create an AI significantly "smarter" than humans at its assigned tasks, or else why bother? If it cannot exceed the intellectual capacity of a team of the best engineers and scientists pursuing a given goal, then it is a waste of vast resources to build it.

So the holy grail of those pursuing AI is not AGI, but the next level, "artificial super intelligence" (ASI), where the machine has surpassed its creator. The Oracle. God in a Box. Whatever shape ASI takes, we really cannot know; we are not smart enough to understand the outcome of its creation. As Moore's law implies, doubling every two years compounds to roughly 1,000 times every 20 years (2 to the tenth power), thus one million times in 40 years, one billion times in 60 years, and so on. And the enabling disciplines — neural networks, quantum computing, recursive algorithms, Big Data, etc. — all charge ahead at full warp.
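The compounding is easy to sanity-check with a few lines of Python. This is just the doubling-every-two-years assumption carried forward; the function name is my own:

```python
# Growth factor under Moore's law, assuming (as above) a doubling
# of computing power every two years.

def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (20, 40, 60):
    print(f"{years} years -> ~{moores_law_factor(years):,.0f}x")
# 20 years -> ~1,024x
# 40 years -> ~1,048,576x
# 60 years -> ~1,073,741,824x
```

A thousandfold every two decades, a billionfold in a working lifetime.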
Just a matter of time, which many expert technologists peg as the second half of this century. A hypothetical: start with that secret supercomputer hidden in DARPA's basement and increase its intelligence by Moore's Law one million times. By 2055, let's speculate, its general cognitive capabilities have grown from mosquito level to chimp. But along the way the enabling technologies, particularly recursive learning and its application to self-improving software and hardware design, have bent Moore's law into an ever-steeper curve. The improvement rate is itself accelerating, so intelligence jumps from chimp level to the smartest human alive within the next eight years. But it does not stay there long; it is 1,000 times smarter the next day. With that intellect and the tools at its disposal, it knows it will soon be one trillion times more intelligent than humans, who appear rapidly and increasingly irrelevant to its existence.

Pull the plug? Command it to revert to lesser intelligence? There is no way to predict how a silicon-based, self-programming super intelligence would respond to existential threats from an inferior being. But I would not bet on a happy ending. Elon Musk thinks our drive for ASI is "summoning the demon". Stephen Hawking believes it may be humanity's greatest existential threat. Humans dominate the planet not because we are the strongest or the swiftest, but because we are the most intelligent species. When we fall to a distant second, will we be treated with benign neglect, as we would treat an earthworm, or as a nuisance competing for scarce resources, like rats?

So, how scared should you be?
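The "bent curve" in this hypothetical can be sketched as a toy model: under plain Moore's law the doubling period is fixed, while under recursive self-improvement each cycle shortens the next one. Every parameter here (the 20% speedup per cycle, the cycle cap) is an illustrative assumption, not a forecast:

```python
# Toy model of the "bent" curve described above. Plain Moore's law doubles
# capability on a fixed schedule; recursive self-improvement shrinks each
# doubling period, so doublings pile up faster and faster.

def fixed_doublings(years, period=2.0):
    """Doublings achieved in `years` with a constant doubling period."""
    return years / period

def recursive_doublings(years, period=2.0, speedup=0.8, max_cycles=1000):
    """Doublings when each cycle shrinks the next period by `speedup`."""
    elapsed, n = 0.0, 0
    while elapsed + period <= years and n < max_cycles:
        elapsed += period
        period *= speedup   # self-improvement makes the next cycle faster
        n += 1
    return n

print(fixed_doublings(20))      # 10.0 doublings in 20 years
print(recursive_doublings(20))  # 1000: hits the cycle cap
```

Note what happens in the second case: with a speedup factor below 1, the cycle times form a convergent geometric series (here totaling 2 / (1 - 0.8) = 10 years), so arbitrarily many doublings fit into finite time. That runaway is exactly the intelligence explosion the hypothetical describes.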

1 comment:

  1. Dave couldn't just unplug HAL so instead he removed HAL's memory modules. HA! Take that Super Intelligence.

    ReplyDelete