Weird Science - Towel-Folding Robot

Judging by the huge response to what I thought was a fairly long and obscure post about a tiny coincidence, the Hitchhiker's Guide and cutting-edge science are obviously a winning combination.

So here is a super special Douglas Adams bonus: a robot folding towels! Okay, so that's a bit of a stretch, but it is still quite cool.

Note that this video has been sped up 50x; in real time it took the robot over an hour and a half to complete this one task. Perhaps it was feeling a little depressed?
     Posted By: Dumbfounded - Sat Apr 17, 2010
     Category: Boredom | Futurism | Inventions | Robots | Science | Experiments | Technology

Comments
Some people may think that a robot doing a task as mundane as this is nothing to get excited about, but despite the amount of time it takes, this really is quite an amazing feat. You can toss it a pile of towels that aren't marked in any way, and the robot can figure out how to fold them and even stack them according to size. I once read a book written in the 1980s that said a robot could perform complex calculations many times faster than any human, but that no robot would ever be able to tie its own shoe. I always thought that sounded incredibly naïve, and I would bet it won't be too long before a robot can do that too.
Posted by Salamander Sam in Chicago on 04/17/10 at 07:37 PM
1.5 hours to fold 5 towels. That's about the same speed most of us men do it. Really! Just ask my wife.

(Anyhow, it did it wrong!)
Posted by Expat47 in Athens, Greece on 04/18/10 at 12:26 AM
SalSam has it right -- it's a topological problem that is NOT easy to resolve (note the several twists it puts on the corners, once it has tentatively identified them, to prove it understands the geometry of the towels) -- We'll need to understand and refine these decision-making processes before we can launch a semi-autonomous submarine spacecraft to Europa and beyond.

But in my opinion, this li'l robot is a hoopy frood who REALLY knows where his towel is!
Posted by warrenwr on 04/19/10 at 01:20 AM
Questions remain: did it "know" how big the towels and washcloths were in advance, and how many of each to expect? And what if the first towel were made of terry-cloth, the second silk, and the third canvas? This is my argument for a manned presence in space. A machine can only respond to pre-programmed stimuli, which assumes humans are smarter than the universe around them. I love the WUverse because it constantly proves we are not only dumber than we imagine, we are dumber than we CAN imagine! (Apologies to Dr. Clarke)
Posted by warrenwr on 04/19/10 at 01:38 AM
I've been ordered to fold some towels. Here I am, brain the size of a planet and they ask me to fold some towels. Call that job satisfaction? 'Cos I don't.
Posted by DownCrisis on 04/19/10 at 09:41 AM
warrenwr -

The robot does not see the towels before they are given to it, and they are not limited to a certain size or material either. That is, after all, why people use the term "artificial intelligence" (though no one is implying that this robot is sentient). I'm not sure what that has to do with space exploration, though.

DownCrisis -

Thank you! It's been a couple years since the last time I read the Guide and I couldn't remember how this post related.
Posted by Salamander Sam in Chicago on 04/19/10 at 09:54 AM
:lol: DownCrisis, you hooptiously drangle me! Well put!

SalSam: this semi-autonomous robotic activity will be essential in distant locations like Europa, where it takes many minutes to hours to send information home, have it interpreted and judged, and then have decisions sent back. A mostly-autonomous robot will be essential -- remember, under the ice there won't be any solar power to recharge batteries, so the brave little toaster won't live long. These are "baby steps" -- and actually, pretty impressive.

(I suppose there will be a "base" module, the "lander", and it might have solar arrays, but as far out as Jupiter it's not going to get much good from the sunlight available. And how much exploration will the submarine-bot do if it has to limit itself to an umbilical cable?)

My guess: the bot will melt through the ice, dragging a power cable with a transceiver until it hits open water, then it'll cut loose and start exploring on its own, batteries only. Note that we could be looking at KILOMETERS of ice and an ocean many times the volume of Earth's oceans beneath that -- it's a crap shoot, and a robot that knows how to fold towels by itself is a step in the right direction.

My guess at the kit: a sonar gizmo, LED/camera/microscope, hydrophone, chemical tongue (environmental and biological), thermal sensors, some rudimentary propulsion and positioning system, no robotic arm, and a non-toxic capacitance-discharge power cell instead of a battery. No nukes this time.
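(For scale, here's a quick back-of-the-envelope on that signal delay, sketched in Python -- the Earth-Jupiter distances are rough ballpark figures I'm assuming, not mission numbers:)

```python
C = 299_792_458  # speed of light, m/s

# Rough Earth-Jupiter distances in km; they vary with orbital positions.
NEAREST_KM = 588_000_000
FARTHEST_KM = 968_000_000

for label, d_km in [("nearest", NEAREST_KM), ("farthest", FARTHEST_KM)]:
    one_way_min = d_km * 1_000 / C / 60
    print(f"{label}: one-way delay ~{one_way_min:.0f} min, "
          f"round trip ~{2 * one_way_min:.0f} min")

# nearest:  one-way delay ~33 min, round trip ~65 min
# farthest: one-way delay ~54 min, round trip ~108 min
```

Add human interpretation and judgment on the ground and you're easily into hours per decision -- hence the need for autonomy.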
Posted by warrenwr on 04/20/10 at 07:57 AM
And actually, the propulsion system is gravy. A bathyspherical deep diver with two or three "scoops" to act as a drogue to keep it upright and slow its descent as it sinks toward the core, taking pix of the stuff funneling through the scoops, would be enough for this first time out. Maybe a single drogue over the top would be adequate, but there's the risk of heat contamination of samples flowing around the bathysphere. And if something like a jellyfish clogged the scoop/drogue -- well, that'd answer a big question, wouldn't it? A towel-folder might know to let it loose after they shook hands; the critter might even survive to tell its offspring about being abducted by aliens!
Posted by warrenwr on 04/20/10 at 08:25 AM
@ warrenwr

I understand what you meant about autonomous space probes surveying other planets; I just didn't know what you meant when you said that was your argument for manned space exploration. As much as I want to see people visiting other planets, I just can't see manned missions to Europa (at least until we make friends with the fish people who live under the ice), and for things like that a robot is the only answer, even if it isn't autonomous. Of course, by the time NASA does finally get around to Europa missions, robots will probably have reached the "intelligence" levels of smart animals or even children, so they should be even better equipped for the job.

By the way, when you said "no nukes," were you referring to nuclear power packs like the ones used on the Voyager probes? Plutonium can be messy, but with the first fusion reactor going on line recently, that's another problem that could be fixed by the time we send a probe to Europa.
Posted by Salamander Sam in Chicago on 04/20/10 at 01:31 PM
Oh, face-palm, sorry, I was distracted. For less obviously hostile environments -- say an asteroid mission or the moon or Mars -- however good a robot is, it won't be able to adapt instantly and instinctively to rapidly changing circumstances. Remember Apollo 17, when Harrison Schmitt and Gene Cernan discovered the famous "orange dirt" -- at first they thought it was a reflection off the gold mylar insulation, but as they stirred it up they started getting really excited. A robotic explorer would probably have missed it entirely. And that's my point ... wherever possible, a manned presence is better. This would obviously exclude Europa any time in the foreseeable future.

And the "no nukes" idea may be overridden by improvements in technology, but I'm betting that RTGs (radioisotope thermoelectric generators) such as are aboard the Voyagers wouldn't be allowed on early Europa missions for fear of possibly contaminating hypothetical life forms there -- and rightly so. The Galileo spacecraft was ordered to self-destruct as it neared the end of its controllable service life lest it contaminate Europa accidentally. I would bet any probe specifically designed to land there will be the most sterile object ever created by man, and the least potentially damaging for as long as possible. (A solid-state ceramic spacecraft? It's remotely possible, no pun intended.)

But until we get a viable AI system, which we are obviously nowhere near, the most bang for the buck is a manned presence. I think this hoopy bot is a remarkable step forward, but it's like Jeff Goldblum's "Seth Brundle" character said in the remake of "The Fly" ... the AI system would have to be "crazy", and we can barely make one that's "sane" now. And by "crazy" I don't mean "HAL" crazy, I mean obsessively curious and responsive to unanticipated stimuli -- orange dirt, e.g.

By the way, hi, I'm Bill. Ex-Boeing computer animator, now a full-time freelance science fiction illustrator. I used to deal with the folks who thought this stuff up, and a lot of their thinking rubbed off on me. I'm not an expert, but I know enough about a lot of subjects to be dangerous. The JIMO (Jupiter Icy Moons Orbiter) was 86'd after Galileo for fear of contaminating Europa with RTGs we might lose control of, and the post-Challenger world isn't thrilled at the idea of nukes in space until we get a very serious handle on the reliability of the launch vehicles and containment systems.

But we should meet off-site for this kind of talk -- I'm here for the "crazy" that Chuck and Paul and you and Patty (hi, sweetie!) deliver with such delightful regularity.
Posted by warrenwr on 04/20/10 at 03:10 PM
hi yourself honey! i don't pretend to 'get' all the tech stuff you guys talk about but i still like reading it. i'd hate to miss out on the education by osmosis i get by rubbing elbows with all the brilliant people who comment here. question: doesn't anyone fear the capabilities of intelligence without conscience? look at what sociopaths do, murder and torture of other humans with no more empathy than if it was a bug they were stomping. i know someone will say i watch too much tv and too many movies, but intelligence and autonomy of action without empathy or conscience is a frightening thing. i know we can't put the genie back in the bottle now, but i sure hope the powers that be are mindful of all sides of this equation.
Posted by Patty in Ohio, USA on 04/20/10 at 09:00 PM
Patty -

This may get confusing, but I hope it clears things up. There is a difference between artificial intelligence and intelligence. True intelligence implies sentience, or self-awareness, something which a machine may never be capable of (though there have been endless philosophical debates on the subject). The thing is, artificial intelligence is a simulation of intelligence, so once you get into advanced artificial intelligence there might not be any way to tell whether or not it is sentient, since it would be programmed to appear sentient.

An artificial intelligence is a program that has been written to act like a person, to respond to questions and conversation like a person would, and ideally to learn by experience (think Data from Star Trek). In the end, however, it is still just a simulation made up of lines of code, in the same way that a computer animation is just a simulation of a physical object. Luckily this means that an artificial intelligence wouldn't be sociopathic by nature, but bad programming can still cause problems when an artificial intelligence has the ability to interpret and learn. For example, HAL 9000: he was given conflicting orders to tell the crew everything he knew in order to help the mission, but also to keep the true nature of the mission secret. The logical solution was to get rid of the crew, so he could keep the secret without having to lie to them.

We still have a long way to go before artificial intelligence technology gets anywhere near that level. If you have ever played with one of those automated instant messaging programs, you will see that while they can respond to certain questions and statements, they are still basic software programs and can't be mistaken for real people. There is a test of artificial intelligence called the Turing test, in which a panel of human judges communicates anonymously with a computer and a human and then tries to tell which is the computer. So far no artificial intelligence program has even come close to passing. If and when that day comes, however, the debate over whether computers can become sentient will be bigger than ever.
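If it helps to picture the setup, here's a toy sketch of how one such session might run, in Python -- the "hidden human" prompt and canned chatbot here are made-up stand-ins for illustration, not any real test protocol:

```python
import random

def human_reply(question):
    # Stand-in for a real person typing at a hidden terminal.
    return input(f"(hidden human) {question} > ")

def machine_reply(question):
    # Stand-in for the chatbot under test; a real entry would be an
    # ELIZA-style program or something far more sophisticated.
    return "That's an interesting question. Could you say more?"

def turing_test(rounds=3):
    """One toy session: the judge chats blindly with parties A and B
    (one human, one machine) and then guesses which is the machine."""
    parties = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(parties)            # hide who ended up as A and as B
    labels = dict(zip("AB", parties))
    for i in range(rounds):
        question = input(f"Judge, question {i + 1}: ")
        for label, (_, reply) in labels.items():
            print(f"  {label}: {reply(question)}")
    guess = input("Judge, which one is the machine (A/B)? ").strip().upper()
    print("Caught it!" if labels[guess][0] == "machine" else "It passed!")

turing_test()
```

The whole point is that the judge sees only the text, so the machine "wins" if its answers are indistinguishable from the person's.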
Posted by Salamander Sam in Chicago on 04/20/10 at 10:02 PM
thank you sam, that helps. we are a long way from my concern anyway. if there are 2 means to an end, and one accomplishes the objective more quickly and efficiently but harms someone, while the second-best solution works less efficiently without harming anyone, then number 2 becomes number 1 -- as long as the value of humanity is factored in. otherwise it becomes, in my mind, like getting stuck in a machine after it is switched on with no way to stop the cycle (think industrial accident). the ability to shut it down in an emergency (as in, faulty ai logic causing unexpected harm) has to be factored in at each level. too often we become complacent in our certainty that all eventualities have been handled, so safety nets aren't in place (think titanic). just thinking aloud. 😊
Posted by Patty in Ohio, USA on 04/20/10 at 10:58 PM
Patty, please continue to think aloud! We are all contributing to the development of AI by asking questions to which there are as yet no answers. A good friend of mine brought to my attention the work of one Lawrence Kohlberg, which should be incorporated into any artificial intelligence program that attempts to pass the Turing test. Kohlberg posits that there are specifically defined "levels" of behaviour that describe human motives, and although there are conflicting interpretations of his work, a very interesting phenomenon seems to hold: someone who operates on one of these levels can understand the next level up, but not the level above that. And when attempting to communicate with any level below, success is only achieved with the next-lower level. This may be a bit technical, but it's fascinating -- http://en.wikipedia.org/wiki/Kohlberg's_stages_of_moral_development
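If you wanted to bake that rule into a program, my paraphrase boils down to something like this little Python sketch (the rule is my summary above, not anything from Kohlberg's own text):

```python
def communication_succeeds(speaker_stage, listener_stage):
    """The paraphrase above, as a rule: a listener can follow at most
    one stage above their own, and a speaker talking down only really
    lands one stage below -- so ideas carry across a gap of at most 1."""
    return abs(speaker_stage - listener_stage) <= 1

# A stage-4 listener can follow a stage-5 argument...
assert communication_succeeds(5, 4)
# ...but a stage-6 argument goes over their head,
assert not communication_succeeds(6, 4)
# and a stage-5 speaker doesn't really reach a stage-2 listener.
assert not communication_succeeds(5, 2)
```

Which would suggest that moral ideas can only propagate one stage at a time -- food for thought for anyone programming a "moral" AI.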
Posted by warrenwr on 04/21/10 at 02:45 AM
bill that is fascinating. the issue may end up being, at what moral level does the human doing the programing reside.
Posted by Patty in Ohio, USA on 04/21/10 at 06:32 AM
They're geeks, remember? Sadly, Kohlberg's thesis was written before the rise of the internet, so he neglected to include "will it make hot chicks go out with me?", "is there porn involved?" and "would it be cool?" as stages of moral development; I expect the first AIs will struggle to reach stage 1. 😉

I think the weakness in Kohlberg's thinking is that he assumes the stages of moral development he can perceive are all there are. There may be levels of morality above and below the classical six that we are not aware of, or perhaps are even incapable of being aware of. When intelligent machines arrive, they may be capable of modes of thought unknowable to us, e.g. existing as gestalts or hive minds, and so may also have a moral outlook that is equally alien. Such effects are also likely to be emergent and are not guaranteed to have our best interests at heart, whatever their programs say.
Posted by Dumbfounded on 04/21/10 at 08:58 AM
thank you dumbfounded, that is what i was trying to say. you are so good at finding the right words to make a point. i often find myself stumbling around trying to express my views, many times failing miserably at the task. once again you are the greatest sweetie.
Posted by Patty in Ohio, USA on 04/21/10 at 10:01 AM
ROTFL, Dumbfounded! What Patty said -- very erudite!
Posted by warrenwr on 04/21/10 at 11:28 AM
I guess it's never too early to create the Turing Police. Sign me up! I'll even change my name to Deckard.

"The tortoise lays on its back, its belly baking in the hot sun, beating its legs, trying to turn itself over but it can't.  Not without your help.  But you're not helping."
Posted by DownCrisis on 04/21/10 at 12:48 PM
"What's a tortoise?"
Posted by Salamander Sam in Chicago on 04/21/10 at 12:50 PM