Every morning at Serious Wonder we peruse the internet for interesting events and future ideas to share with all of you. This morning I came across this story at Singularity Hub that sent me off on a quest for Artificial Intelligence. In a recent Turing-like competition to create a “bot” indistinguishable from a human player in the first-person shooter Unreal Tournament 2004, judges were unable to distinguish between bots and real players in the game. The bots were judged on a “humanness” scale and scored higher than their fleshy counterparts, seemingly blurring the lines between man and machine and furthering our steps towards true AI. Well, sort of. It turns out the two bots that won the prize did so by mimicking the play of human players instead of relying on a bank of predetermined trigger actions. While programming this approach in a real-time environment is amazing, it's not real AI, although it may be part of the equation. Nonetheless, my mind was sent off on a curious quest to find some real AI.
Let's start with possibly the most intelligent AI in modern culture: HAL from 2001: A Space Odyssey. (For anyone who hasn't seen it, skip ahead to the next paragraph.) Easily the most compelling scene, for me and probably for most other viewers, is Dr. Bowman shutting down HAL. HAL displays what can only be called real emotion in this scene, stating that he is scared and can feel his memory being lost. The only thing “artificial” about it is his monotone recitation. HAL doesn't sound conscious, but as Bowman says, he certainly acts as though he is. My quest would be to find a robot that doesn't just talk like a human, with slang, common lingo and such, but one that makes me believe it contains real artificial intelligence, and that what I say to it has an actual philosophical impact.
I started with conversation bots. Anyone remember SmarterChild on AIM? Well, we've come a long way since then. A million other bots have been made, and I went through a few of them. They weren't that great. One of them only lasted one question:
Bot: Fred is now operational.
Me: I didn’t know humans were considered “now operational”
Bot: Yes you did.
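Bots like this typically run on simple keyword triggers: a canned reply fires whenever a pattern appears in your message, with no understanding of what you actually said. A minimal sketch of that approach (a hypothetical illustration, not the actual code of any real bot — the trigger words and replies here are made up to mirror the exchange above):

```python
# A "bank of predetermined trigger actions": each rule pairs a trigger
# word with a canned reply. (Hypothetical example for illustration.)
RULES = [
    ("operational", "Yes you did."),
    ("morning", "It's afternoon here."),
    ("where", "I'd rather not say."),
]

def respond(message: str) -> str:
    """Return the first canned reply whose trigger word appears in the message."""
    text = message.lower()
    for trigger, reply in RULES:
        if trigger in text:
            return reply
    return "Tell me more."  # fallback when nothing matches

# The bot fires on the word "operational" and answers blindly,
# with no grasp of the sentence around it.
print(respond("I didn't know humans were considered 'now operational'"))
```

You can see why the conversation above went sideways: the reply is keyed to a single word, not to meaning.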
So I went for the most advanced conversation bot to date, Cleverbot. Cleverbot is in fact so clever that a version of it passed the Turing Test. That wasn't the version I talked to, but I still had high hopes. Here's our conversation:
Me: Good morning
Cleverbot: It’s afternoon here
Me: Where are you?
Me: Where in respect to Manchester, UK is that?
Wrong, Cleverbot. Clearly we have a long way to go before HAL is operational. My quest seemed cut short, but it got me thinking about the philosophy of artificial intelligence. You know, what would make a computer truly intelligent? Not surprisingly, there is fierce debate and seemingly thousands of theories. One of the cleverer, seemingly accurate responses came from famed computer scientist John McCarthy. In 1988 he said:
Artificial Intelligence cannot avoid philosophy. If a computer program is to behave intelligently in the real world, it must be provided with some kind of framework into which to fit particular facts it is told or discovers. This amounts to at least a fragment of some kind of philosophy, however naive.
In order to be more “human,” a computer needs to be able to tell right from wrong, good from bad, and the consequences of actions. You can read his 1996 paper and other great papers on the topic at AITopics.net. If we define ourselves as “intelligent” (maybe some of you wouldn't), then we'd model it after ourselves. A computer would have to wonder, grow, discover, make mistakes, and learn. Some scientists say this in itself is impossible; others, like Hubert Dreyfus, say it has to be possible if our brains adhere to the physical laws of the universe. The best bet is to completely simulate a brain, something DARPA has poured $5 million into and numerous scientists have spent their careers working on.
Now I'm actually at a dead end, basically because I can't begin to wrap my head around these pages of research and theories. I found myself diving into things I simply don't understand, and I should have known this exploration would end with more questions than I started with. You're probably just as confused. Hopefully this has sparked some (healthy) curiosity. I won't leave you without some sort of prediction, though. Of course it's not mine; I'll take it from one of the more reliable sources on the subject, Ray Kurzweil. He says that by 2029 we'll have computers that can deal with “a full range of human intelligence.” Still, he defines a computer as “conscious” if it is intelligent and can be convincing enough in its emotions to make us laugh or cry. With the rate of advancement happening today, that may not be too far off; we'll just have to wait 17 years to see.
I can only say one thing for sure, that I know is true: it's going to be a hell of a long time before I'm playing chess with a real HAL 9000.
What are your thoughts on AI? Got any good articles for us to read, or any interesting theories or ideas we didn't touch upon that you think could be key to creating real AI? Share them with us; we're always looking for thought-provoking material. After all, that's the point of Serious Wonder, isn't it?