So, first of all, this post is based on this article in PopSci.
What are the social implications of humanoid robots?
And please don't limit your comments to this question, but expand above and beyond!
Wednesday, August 30, 2006
13 comments:
What about the Three Laws of Robotics? If we use them, then we shouldn't have to worry about any of that.
Asimov was a great science-fiction writer, but I'm not sure we have the ability to program rules like that into a humanoid robot. It seems that if they really had human intelligence, then they would be able to overcome those rules, or choose not to obey them. And if we can program them, then they would have the intelligence to reprogram themselves or other robots.
Another side to this argument would be that by creating a sentient or near-sentient being, we would be "playing god". I am almost positive that Religious Right groups would attack the creator and producers like a starved shark smelling fresh blood. What do you think the repercussions would be if we created a new species that has never before existed on this planet, and wasn't intended by evolution to exist? I think there would be a significant impact on the biological ecosystems already present on the planet, as a robot wouldn't know how to react to its environment while keeping that environment intact. Anyway, thoughts?
Since programming is basically just writing a series of concrete tasks or instructions, I'm sure you can program things like decision making and sociability into a robot. A lot of things are just cause and effect, event and response. But how do you write a program for creativity?
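Abby's "event and response" point can be sketched in a few lines. This is a toy illustration, not a real robot controller; the event names and responses are invented for the example:

```python
# A minimal event -> response rule table, in the spirit of
# "cause and effect" programming. Every behaviour is a lookup;
# nothing here resembles creativity, only retrieval.
RULES = {
    "obstacle_ahead": "turn_left",
    "battery_low": "return_to_dock",
    "dirt_detected": "increase_suction",
}

def respond(event):
    # Look up a canned response, falling back to a safe default
    # when no rule matches the incoming event.
    return RULES.get(event, "stop_and_wait")
```

The interesting question in the thread is exactly what this sketch lacks: a rule for events nobody anticipated.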
In my mind, we are still a long, long way off from being able to create sentient robots. I feel that the more immediate problem is household robots, ones that can cook, clean, watch the kids.
As it is, I don't think it is possible to write in creativity, but if it were, it would have to be a set of rules that allowed the robot to change its own program. I don't think that would be possible. I think that a robot would have "artificial intelligence," which is just that: artificial. I think that "AI" just means a really complicated program that, like abby said, is just a series of instructions, but those instructions are not always linear. Take the Roomba, for example. It has no concrete path that it follows as it cleans; however, it is fully capable of cleaning a whole room of any size, because it finds the size of the room, and then its program tells it what the path should be, based on a series of algorithms that change its path depending on the room. If you were to program each possible room size into a Roomba, it would be a MASSIVE program, one that something the size of a Roomba not only lacks the memory to hold, but also lacks the processing power to search for which of those paths it should take.
If our house robot held the same guidelines, then it would not have true creativity, but a rough impersonation of it. It could figure out that there is an object in its way and move around it in the best way possible, no matter what is actually in its path.
I think that the largest limitations on this technology are religious extremists, as devin pointed out, and having our robot be able to figure out new programs efficiently. Currently it would take huge quantities of writing to create a program to, for example, cook chicken. Our robot would need to be advanced enough to actually read and understand words that were written on paper, or typed into a word file that was transferred into it. The number of problems in creating a robot complex enough to do that is, in my mind, the biggest hurdle we have to face right now.
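The Roomba point above, computing a path from the measured room rather than storing a path for every possible room, can be sketched as a toy back-and-forth (boustrophedon) sweep. This assumes an idealized rectangular grid of floor cells; real Roomba firmware is proprietary and works differently:

```python
def coverage_path(width, depth):
    """Generate a back-and-forth sweep over a width x depth grid
    of floor cells. The path is derived from the room's measured
    size at run time, so no room-specific path needs to be stored."""
    path = []
    for row in range(depth):
        # Alternate sweep direction on each row, like mowing a lawn.
        cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        for col in cols:
            path.append((col, row))
    return path
```

A few lines of algorithm replace what would otherwise be an enormous lookup table of precomputed paths, which is exactly the memory argument made above.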
I agree with both wolf and john on this subject. I do not believe it is possible for robots to be created without rules, or in other words to be close to human, simply because we as humans grow up largely based on our experiences and the way we react to them, while robots would "grow up" with one purpose and goal in mind. If we wanted robots to mimic human qualities, that would mean we would not only have to give them the ability to use creative thinking, but they would also need emotions. Now, I don't really see how that is possible right now because, as wolf said, the sheer memory needed to make a robot do a much simpler task is just out of our reach. However, if anyone has seen the movie I, Robot, it brings up some interesting points. The doctor in the movie suggests that every machine has a "ghost" in it: random segments of code that group together, which in turn form unexpected protocols. These could become unanticipated behaviours. These free radicals would turn into free will and creativity, which would raise the question of whether such a thing could be the nature of a soul. This could mean that robots could naturally evolve. Back to TOK: it seems to me that if we had robots with rules, they would rely purely on the logic side of the spectrum, which would make them different from a human being. However, that doesn't mean change isn't possible, for sometimes it seems laws are made to be broken. Just some ideas, however far-fetched they may be.
I disagree with Noah's ideas. These programs that he mentions, such as the one that can design a boat, or the pet robo dogs that can learn, are still just following a set of instructions. The boat program merely finds the easiest way through a set of mathematical parameters; the robo dogs just take an input, run it through circuits, and produce a reaction. This is not true creativity. Robots will be able to react to external stimuli but will not be able to think of creative solutions to problems. Thus robots will not be able to reach real intelligence on a human level. They will simply be very adaptive computers.
I have to disagree. I think that eventually it will be possible for them to become creative and think freely; it is just much further off than we imagine. The amount of processing power that would be required would be astronomical, and I don't think that true sentience can be achieved in an artificial mind without sufficient storage, which, as far as I can fathom, would require computers the size of the planet.
Computers with free will will be too big to be contained within a robot body; they may have robot "hands," but those would just be extensions of the whole, not intelligent by themselves. The only real task that I can see robots truly being effective in is in households, as nurses, nannies, cooks, butlers, etc.
They will be able to react to external stimuli, but unless they have a degree of creativity they couldn't find a way around them. To get an idea of the creative thinking involved in a simple task, view this site. That is just for something that does not involve any movement, so imagine using something similar to navigate a room, no matter the size, condition, or objects inside, and imagine the kind of processing power that would take, while still running input from simulations of three senses and coordinating the movement of walking.
To the earlier topic of the three rules: those would work, with a limit. Robots would be programmed not to harm anything, not just people. This would have to be hard memory, not something alterable like what we find on our modern hard drives, but more like information on a CD, so that a robot cannot overwrite it.
There actually are evolving algorithms that have been used in computer programs. Such an algorithm creates a program that in many ways resembles evolutionary change: it has a task to accomplish, and it finds a way to do the task, but not always in what we would call a logical manner.
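The evolving algorithms described here are usually genetic algorithms. A minimal sketch of the idea follows; the numeric fitness target and all parameters are invented purely for illustration:

```python
import random

def evolve(target=42, pop_size=20, generations=100):
    """Toy genetic algorithm: evolve an integer toward `target`.
    Fitness is closeness to the target. Each generation keeps the
    fittest half of the population and adds mutated copies of the
    survivors, so the best solution never gets worse."""
    random.seed(0)  # fixed seed so the demonstration is repeatable
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Select: sort by fitness and keep the best half.
        population.sort(key=lambda x: abs(x - target))
        survivors = population[: pop_size // 2]
        # Mutate: each survivor spawns a slightly perturbed child.
        children = [s + random.choice([-1, 0, 1]) for s in survivors]
        population = survivors + children
    return min(population, key=lambda x: abs(x - target))
```

As the comment says, the search "finds a way" without following a path a human would call logical: it just keeps whatever happens to score better.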
I, for one, would like to express my horror at this situation. Humans will create a robot that can think, there's no doubt in my mind (after all, that's one less thing humans have to do. Think of all the energy it takes to think!), and once we do that it's just a hop, skip, and a jump until we destroy ourselves. Sure, you can all say that I've seen too many science-fiction movies, but I am a firm believer that we'll destroy ourselves, and robots are a very prominent option. I think we need to stop trying to make robots like humans, or any other living thing. We need to make robots that can be robots. Of course, as I said, no matter how many times I or anyone else says this, we're going to make humanoid robots. At first this'll be great; then the robots will begin to notice that they're surrounded by idiots (i.e. humans), and begin to think that they should be in power. Of course, we humans, in our infinite wisdom, will also make war machines capable of thought. So, the war machines will team up with the thought machines and slowly take over the human race. All I have to say is: For god's sake, Humanity, at least find a semi-livable planet before then.
P.S.: Please check your spelling before publishing; it was really getting to me while reading.
I still don't think that robots will reach the same level of intelligence as humans. My creativity problem is still unanswered. Your site was very interesting, Wolf, but I would argue that it is not creativity. The "AI" simply runs the information it has been given through a database and displays the "guess" with the most hits. The robots that can move through a room are not displaying creativity; they are merely following instructions that tell them how to respond to stimuli from sensors. Science-fiction stories about cyborgs taking over the world are just that: fiction. While I can picture humans using very complicated robots to kill each other, robots will never be independent enough of humans to turn against them. They will remain essentially a tool.
I just thought of another interesting thing. You know how at the bottom of every post there's a gimpy, to prevent spambots from overtaking the site? That's an excellent example of how even simple human creativity can foil a robot. Even more, it takes human creativity to write a program capable of overcoming a captcha. Can anyone think of an example where a robot was "smarter" than a human, completely independently?
I'd like to point out that I beat the 20 questions website with "eggplant". I also beat the Darth Vader 20 questions website multiple times. This has absolutely nothing to do with the conversation (primarily because, no, I cannot think of an instance where a robot was smarter than an independent human with creativity etc., although I could make an argument that the average computer uses a lot more "brain" power than a human ever could, and holds a lot more information than a human ever could), but it makes me feel important.
That's because you are important. You're also incredibly smart, and your intellect is far superior to computers. Now quit talking to yourself. It's creepy.