If you know what the Turing test is, skip the next two paras.
AI - Artificial Intelligence - the phrase gets thrown around a lot, and loads of sci-fi movies have come out of it. I'm excited about intelligent machines too, ones that can learn to do things all by themselves without having to be precisely programmed (I'd be jobless then, but that won't matter once the robots take over the world anyway). For some reason, a lot of people seem especially interested in machines that can pretend to be human - screw the machine that can cure cancer, let's build one we can have a touchy-feely chat with. I'm not saying human-like robots are useless, just that they're not what we should focus on. Okay, I'm just messing around. There are good reasons people are interested in robots that think and act like us:
1) Human intelligence is the only kind of intelligence we know anything about,
2) If we want robots working with/for us, we need them to be able to understand us, and
3) If we can build robots that can think like us, then we learn a lot about our own intelligence and consciousness.
When there's demand for something, we all know someone will show up to scam people with a phony product. People want human-like robots, so someone comes along with a chat-bot and claims it thinks like a person - and when it clearly doesn't, they say it thinks like a stupid person (or that only wise people can understand this robot: the emperor's-new-clothes approach to scamming people). This raises an important question: how do we tell whether a robot is really intelligent (human-like)? One popular test is the Turing test. It involves two participants, a person and a supposedly intelligent computer, plus an interrogator who has to figure out which is which. The interrogator has to do it through a simple Q&A session over a text-only web chat (no video or audio). The human answers honestly and tries to convince the judge that they're human, while the computer has to lie so the interrogator is convinced it's human too.
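If it helps to see the moving parts, here's a toy sketch of that setup in Python. Fair warning: the judge object and its two methods are stand-ins I invented for illustration; this is just the protocol written as a loop, not anyone's real implementation.

```python
import random

def run_turing_test(judge, human, machine, n_questions=5):
    """One round of the imitation game: text-only Q&A, judge picks the machine."""
    # Randomly hide the two participants behind the labels "A" and "B".
    labels = ["A", "B"]
    random.shuffle(labels)
    participants = dict(zip(labels, (human, machine)))
    machine_label = labels[1]  # zip pairs labels[1] with `machine`

    transcript = []
    for _ in range(n_questions):
        question = judge.ask()
        # Text only: no audio, no video, just answers over "chat".
        answers = {label: p(question) for label, p in participants.items()}
        transcript.append((question, answers))

    # The judge studies the transcript and names the machine.
    return judge.identify_machine(transcript) == machine_label

class CoinFlipJudge:
    """A deliberately useless judge, just so the sketch runs end to end."""
    def ask(self):
        return "What does rain smell like?"
    def identify_machine(self, transcript):
        return random.choice(["A", "B"])

# With identical answers, even a good judge can only flip a coin.
caught = run_turing_test(
    CoinFlipJudge(),
    human=lambda q: "Petrichor, mostly.",
    machine=lambda q: "Petrichor, mostly.",
)
print("machine caught:", caught)
```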
There are some issues with the Turing test, though. My biggest one: why anthropomorphise intelligence? Why assume something has to think like a human to be intelligent? Suppose an alien species builds spaceships that bring them to Earth. We make them take the Turing test for fun, and they fail it. Does that mean the species isn't intelligent, even though they mastered interstellar travel while we can barely explore our own solar system? No, that's stupid. The Turing test places too much emphasis on acting like a human.
I've been thinking about a different way to go about this. The doggo version. It's the same as the standard Turing test, except the computer now simulates the behaviour of dogs instead of people. Clearly, this can't be a Q&A thing. Here's what I imagine the test would look like:
There are two rooms behind two identical doors. One room holds our participant, a computer that simulates the behaviour of dogs and renders it as video; the other has a few real dogs in it. The room with the dogs has enough stuff to keep them entertained - toys to play with, furniture to destroy, food and water dispensers, and alarms and TVs to distract them. There are multiple dogs, of course, so they can also play with each other. A bunch of cameras in the room record what the dogs are doing. Outside the rooms there's a control panel with switches that control what goes on inside - ring an alarm, dispense food, turn on the water sprinklers, and so on. The interrogator, an expert dog-handler stationed outside, has access to the control panel and both video feeds, and, as in the classic test, has to figure out which room holds the computer and which holds the dogs. To make sure video quality doesn't influence the decision, the footage from the dogs' room is passed through some software so that both feeds are of the same quality, graphics-wise.
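Same idea, sketched in code so the setup is concrete. Again, everything here is hypothetical: the handler's methods and the stimulus list are names I made up, and a "room" is modelled as a plain function from a stimulus to a clip of footage.

```python
import random

STIMULI = ["ring_alarm", "dispense_food", "water_sprinklers", "turn_on_tv"]

def run_doggo_test(handler, dog_room, simulator, n_rounds=20):
    """The doggo version: flip switches, watch both feeds, name the simulator."""
    # Hide the real dogs and the simulator behind two identical doors.
    doors = ["door_1", "door_2"]
    random.shuffle(doors)
    rooms = dict(zip(doors, (dog_room, simulator)))
    simulator_door = doors[1]  # zip pairs doors[1] with `simulator`

    footage = {door: [] for door in doors}
    for _ in range(n_rounds):
        # The handler flips a switch on the control panel...
        stimulus = handler.pick_stimulus(STIMULI)
        for door, room in rooms.items():
            # ...both rooms react, and both feeds go through the same
            # filter so graphics quality gives nothing away.
            footage[door].append(normalize(room(stimulus)))

    # The expert dog-handler reviews the footage and names the simulator.
    return handler.identify_simulator(footage) == simulator_door

def normalize(clip):
    # Placeholder for the quality-matching software described above.
    return clip
```

The one design choice worth flagging: normalising both feeds is what keeps this a test of behaviour rather than of rendering quality, exactly like the text-only constraint in the classic version.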
And that is the doggo version of the Turing test. It still tests for intelligence: dogs are intelligent, I'm sure we can all agree on that. If a computer passes the test, it can successfully simulate intelligent behaviour, which, some would argue, is indistinguishable from actual intelligence. This version improves on the standard test because it puts the computer on a more equal footing with the interrogator. When the other participant is a human, there are a lot of subtle things about conversation that might just be social conditioning and have nothing to do with intelligence. With dogs, there isn't nearly as much conditioning, at least not enough to trip up a genuinely intelligent computer.