A new group has been founded at Cambridge University to investigate, among other things, the risk of a future technological singularity. The Centre for the Study of Existential Risk was founded by philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees, and Skype co-founder Jaan Tallinn. The group will look into a number of risks to the human race, but is most notably interested in what risks true artificial intelligence could bring about. They believe it's time to seriously assess the threat AI could pose to humanity. Hold on, they actually have a good argument.
It's taken us less than 100 years to develop into a technologically dependent society. Don't try to deny you're dependent on technology, because even if you don't like it, you are. Chances are your mother was hooked up to a heart monitor when you were born, and you learned your ABCs from Big Bird and the gang. And we're only becoming more reliant on technology, including using it to develop technology itself. As the team cites from Luke Muehlhauser and Anna Salamon's paper Intelligence Explosion: Evidence and Import, "we may see an 'intelligence explosion' or 'technological singularity'—a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole." When machines are able to invent better and faster than we can, they in effect become the determiners of technological evolution. Combine that with true artificial intelligence, and we may have a runaway train on our hands… conducted by robots.
Their most fascinating theory is that AIs will not share our emotional drivers: love, anger, fear, happiness, sadness, and so on. If a technological singularity does come to fruition, it's not that these AIs will set out to exterminate the human race; they'll simply be indifferent to us. The group draws a convincing analogy to the relationship between humans and gorillas. Gorillas aren't going extinct because we actively hunt them down out of hate (although some certainly do for monetary gain), but because we just don't care enough to save them. In this scenario, we would be the gorillas, and the robots the humans.
Read the article they published to introduce the CSER; it's a thought-provoking read, especially because it's coming from three quite smart individuals. Regardless of whether they are right about an impending technological singularity, it's interesting enough that the idea is being taken completely seriously. These aren't kooks, and whether or not you agree, one thing is certain: we've reached a point in our civilization where it's not crazy to think our inventions might overtake us for dominance on the planet. That's food for thought in itself. Story via BBC.