A wonderful TEDx talk by Ken Goldberg, Berkeley 2012:
The search for the origins of humanity, meeting one’s maker, and discovering why we are here: Ridley Scott’s latest film Prometheus tackles some big themes. But arguably the most interesting one surrounds the issue of what it is to be human, raised in the form of the android David.
Both Alien and its sequel Aliens, which Prometheus is said to be a prequel to (although Ridley Scott has disputed this, only conceding that the films all inhabit the same universe), included androids in their crew.
But in Prometheus, the android’s story moves to centre stage, focusing on what defines humanity, and whether a robot can ever hope to achieve it.
Finally, a robot film worth seeing for the reality and not the fantasy. Robot and Frank, one of the hits at the Sundance Film Festival, will be the headline act at the Robot Film Festival 2012, July 14-15 in New York City. But even if you can’t be in New York, you’ll be able to see Robot and Frank around the country in August. The film is about the relationship between Frank, an ageing burglar dealing with dementia, his family, and the health care robot that his family forces him to accept. Frank puts up quite a fight. This crowd-pleasing film by new director Jake Schreier has picked up a distribution deal with Sony Pictures Worldwide Acquisitions and Samuel Goldwyn Films.
The partnership bought U.S. and North American distribution rights for the film that stars Frank Langella and Susan Sarandon, with James Marsden, Liv Tyler and voice work by Peter Sarsgaard. Sony also acquired distribution rights for Latin America, Australia, New Zealand, South Africa, Scandinavia and Eastern Europe, according to a company statement released Wednesday afternoon. According to The Hollywood Reporter, the deal is valued at just over $2 million.
I can’t wait for Robot and Frank to be seen more widely. I saw the film recently at the San Francisco International Film Festival and I think it’s the first film to showcase plausible and pragmatic human-robot interaction. While the robot itself is unrealistic, the emotional interactions of the people around it are definitely real. Our future holds many devices dedicated to our wellbeing and what we choose to do with them will probably differ from the ‘instruction manual’.
A mesmerizing performance by the versatile theater veteran Frank Langella (as Frank) is ably supported by costars like Susan Sarandon (as a librarian Frank has a crush on), Liv Tyler and James Marsden (as Frank’s meddling children). The robot is voiced by Peter Sarsgaard, although the voice work was done completely separately from the rest of the acting, in order to achieve a ‘mechanical’ tone.
Initially director Jake Schreier wanted someone to read the robot dialogue during filming, to give Frank Langella something to play off, as the actress inside the robot suit couldn’t read lines on top of making the suit move properly. But Langella preferred to do the dialogue one-sided. Schreier extols Langella’s incredible virtuosity as an actor, his ability to remember and build on every gesture in each take.
You might be forgiven for thinking that the movie was written for Frank Langella, but the writer, Christopher D. Ford, was a film-school buddy of Schreier’s. The film originated almost 10 years ago as Ford’s graduation short, inspired by the initiatives in Japan to build elder care robots.
“Jake’s film shows us a future that is right around the corner, and I for one, can’t wait for my own robot,” said Sony official Joe Matukewicz, referring to first-time director Jake Schreier. Added Meyer Gottlieb, President of Samuel Goldwyn Films: “Our team fell in love with this clever, irreverent story anchored by Frank Langella’s indelible performance.”
Langella’s performance is so terrific, in fact, that it’s easy to assume the role of Frank, a crusty but charming former burglar who calls himself a “second-story man”, was written for Langella. Frank’s son presents him with a robot to serve as a health care aide. At first Frank is disgusted with himself for talking to “an appliance,” but soon begins to teach the robot how to pick locks.
The film began as a film-school short, inspired by an NPR radio report about a Japanese initiative to create robots that could care for the elderly, said screenwriter Christopher Ford. It was filmed in 20 days in upstate New York last summer. 
Perhaps the wonder is that no one has made a similar film yet. At the SFIFF screening, Jake Schreier talked about the 10 years that it took to go from student film project to making his feature debut, and his fears that someone else would beat him to it.
Robot and Frank starts a public discussion about human-robot interaction that is incredibly constructive and realistic. We are entering a future where, as Slate said, we may find it easier to love machines programmed to help us than our families, who seem programmed to irritate. Not that Robot and Frank is a love story either!
frog’s Creative Director Scott Jenson, the first UI designer at Apple and recently head of mobile UX at Google, blogged about smart devices and how they change the design process. This is relevant to the very real near future of robotics. I’m continuing the zeitgeist sampling here.
Triumph of the Mundane
By Scott Jenson – April 18, 2012
Smart devices require a significant shift in thinking
This blog explores how to design smart devices. But these devices are so new, and require such fresh insights, that our quaint, old-school notions of UX design are completely blinding us. We are stuck between the classic paradigm of desktop computers and the futuristic fantasy of smart dust. The world is either fastidious or fantastic. The path ahead is hard to see. Alan Kay said the best way to predict the future is to invent it… but what if we don’t know what we want?
Coffee Maker Syndrome
I’ve long proposed just-in-time interaction as a core approach to smart devices but while presenting on this topic over the past year, it has astounded me that people have such a hard time just thinking about the overall landscape of smart devices. Take for example this tweet:
Overheard at #CES: “I’m pretty sure my coffee maker doesn’t NEED apps.”
On the face of it, this makes perfect sense. It seems unlikely you’ll be reading your email on your coffee maker. But this dismissive approach is an example of what Jake Dunagan has called “the crackpot realism of the present”. We are so entrenched in our current reality that we dismiss any exploration of new ideas. By stating that apps on a coffee maker would be silly (which is true), we easily dismiss any discussion of other potential visions of functionality.
When television was first introduced, the earliest programs were literally radio scripts read aloud in front of the camera. Radio had been dominant for decades so broadcasters just coasted into TV without thinking creatively about how to approach the medium differently. As Marshall McLuhan said, “We look at the present through a rearview mirror; we walk backwards into the future.”
Smart devices require three big shifts
Assuming that smart devices require apps is like walking backwards into the future. We don’t need our smart devices to run Microsoft Office, we just need them to, say, log their electrical usage (internal, invisible functionality) or give us quick how-to videos (simple user-facing functionality).
If we want to properly discuss how to design smart devices, we must appreciate how they shift away from standard computers in three significant ways: micro functionality, liberated interaction, and a clustered ecosystem.
Shift 1: Micro Functionality
In my last post I discussed a fundamental UX axiom that Value must be greater than Pain. This handy little axiom implies many useful theorems. The most radical is that if pain gets very low, the actual value can also be low. While expensive tablets demand significant functional apps, cheap processing allows for more humble micro functionality. It’s one of the biggest hurdles that people have in discussing smart devices. They are so entrenched in the PC paradigm that they assume every device with a CPU must be bristling with functionality.
However, simple doesn’t equate to useless. For example, whenever I offer up the possibility of a ‘smart toaster’ people often chuckle; it’s the coffee maker syndrome all over again. But there are lots of small and even fun things a toaster could do: log its electrical usage, offer up an instructional video on how to clean the crumb tray, report any diagnostic errors, call the support line for you, or even tweak its ‘toast is done’ sound. All of these are fairly trivial but are still useful if a) the value is genuine and b) the cost of adding the functionality is small. $600 tablets must do quite a bit but this isn’t true for a $40 toaster.
The biggest impact of micro functionality is in how little real interactive power is required. So often when I talk of using HTML5 as the lingua franca of smart devices, people trot out the ‘it can’t do X’ argument, extolling the superiority of native apps. But micro functionality is so basic and simple that HTML5 is more than adequate: you’ll normally only need to view or change a simple value. Micro functionality only requires micro expressivity.
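As a sketch of just how little expressivity micro functionality demands, imagine a toaster’s entire interactive surface as a handful of simple values that can be read or changed. Everything here (the `ToasterState` shape, the field names) is hypothetical for illustration, not any real appliance’s API:

```typescript
// Hypothetical state a smart toaster might expose: just a few simple values.
interface ToasterState {
  kilowattHoursLogged: number; // internal, invisible functionality
  doneChime: "ding" | "chirp" | "silent"; // a user-tweakable setting
  lastErrorCode: number | null; // diagnostic reporting
}

// "Micro expressivity" means all we ever need is to view a value...
function readUsage(state: ToasterState): string {
  return `Lifetime usage: ${state.kilowattHoursLogged.toFixed(1)} kWh`;
}

// ...or change one, here returning an updated copy of the state.
function setChime(
  state: ToasterState,
  chime: ToasterState["doneChime"]
): ToasterState {
  return { ...state, doneChime: chime };
}

const toaster: ToasterState = {
  kilowattHoursLogged: 12.34,
  doneChime: "ding",
  lastErrorCode: null,
};

console.log(readUsage(toaster));
console.log(setChime(toaster, "silent").doneChime); // "silent"
```

The point of the sketch is that nothing here strains HTML5: a form field and a label would cover the whole interaction.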
Shift 2: Liberated Interaction
Remember that Value must be > Pain. Micro functionality requires micro pain to be viable. No one is going to argue with their toaster; this type of functionality has to be quick and easy. Unfortunately, the trend today is that any device with functionality will usually have a tiny display, tinier buttons, a complex user manual, and a tech support line.
Smart devices need to be liberated from being solely responsible for all interaction. I’ve written previously about just-in-time interaction which allows any smart display (a phone, tablet, TV, interactive goggles, and yes, a laptop) to interact with a smart device. Using a significantly more capable device is so much better than cobbling together a cheap LCD display with fiddly little buttons on the device itself. A generation raised on rich phone interaction will expect, even demand better.
Moving interaction to smart displays also has a huge benefit for manufacturers. The cost of computation will likely be the least of a manufacturer’s concerns. Small displays, buttons, complex instruction manuals, and tech support lines are all very expensive. What if manufacturers could assume that any smart device they built would have free access to a big interactive color screen? Not only that but it would have excellent computing power, smooth animated graphics, a robust programming environment and to top it off a universally accepted consumer interaction model that didn’t require any training? Using these displays would allow enormous cost reductions, not only in parts costs, but in simpler development costs as well.
Shift 3: A Clustered Ecosystem
Once we liberate the interaction from the device, we’ve unlocked its functionality across many locations. Not only can I use my phone in front of my thermostat but also with my TV across the room and with my laptop at work across the city. By liberating functionality from devices, we liberate our ability to use these devices from anywhere. My previous post was called “Mobile Apps must die” not because apps will actually die (they will always have a role) but the shortsighted desire to funnel functionality exclusively through them must stop. If these very simple apps are written in HTML5, they can be used across any device which is very powerful indeed.
It is inevitable that devices will ship with interactivity built in. But as more devices become functional, it’s going to become overwhelming to have each device be its own island. The three shifts discussed here: micro functionality, liberated interaction, and a clustered ecosystem all point to a new pattern of thinking: small devices with small functionality that all work together in profound ways. This is a triumph of the mundane; a challenge to our PC-soaked way of thinking.
But this new approach requires an open standard that all devices would use to announce their presence and offer their functionality in a universal language like HTML. In many ways we are at the cusp of a new hardware era of discoverability much like the web first had in the early 90s.
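One minimal sketch of what such an announcement might look like: each device publishes a small record naming itself and pointing at its HTML control page, and a smart display simply gathers and filters those records. The record shape and every field name below are invented for illustration; no real discovery standard is implied:

```typescript
// Hypothetical announcement a smart device might broadcast on the local network.
interface DeviceAnnouncement {
  name: string;           // human-readable device name
  controlUrl: string;     // URL of the device's HTML5 control page
  capabilities: string[]; // the tiny verbs the device offers
}

// A smart display's job is then trivial: collect announcements and
// surface the ones relevant to what the user is trying to do.
function devicesOffering(
  announcements: DeviceAnnouncement[],
  capability: string
): DeviceAnnouncement[] {
  return announcements.filter((d) => d.capabilities.includes(capability));
}

const nearby: DeviceAnnouncement[] = [
  {
    name: "Toaster",
    controlUrl: "http://toaster.local/ui",
    capabilities: ["usage-log", "chime"],
  },
  {
    name: "Thermostat",
    controlUrl: "http://thermostat.local/ui",
    capabilities: ["usage-log", "set-temp"],
  },
];

console.log(devicesOffering(nearby, "set-temp").map((d) => d.name)); // only the thermostat
```

The design point is that the display never needs device-specific code: any device that can announce a name, a URL, and a capability list joins the cluster.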
What’s holding smart devices back is our oh-so-human ability to misunderstand their potential. These three shifts are a big step in understanding what we need to do. Let’s be clear, this is not technically challenging! We just need to understand what we want. Alan Kay is right: we have to invent our future. frog, where I work, is just starting to build simple prototypes to validate these concepts. As they mature, I’ll be sharing more information about them. It’s clear that technology is not the limiting factor, it’s just our desire to imagine a different future.
blog post in the wild at frog design’s designmind
There is a wave of excitement about the very real, fast-approaching future of robotics. I’m posting some of the zeitgeist here.
from Wired Magazine.
A longtime technology forecaster, Saffo is a managing director at the Silicon Valley investment research firm Discern. Formerly the director of the Institute for the Future, he is also a consulting professor in Stanford University’s engineering department.
There are four indicators I look for: contradictions, inversions, oddities, and coincidences. In 2007 stock prices and gold prices were both soaring. Usually you don’t see those prices high at the same time. When you see a contradiction like that, it means more fundamental change is ahead. The second indicator is an inversion, where you see something that’s out of place. When the Mexican police captured the head of a drug cartel, in the photos the perpetrators were looking proudly at the camera while the cops were wearing ski masks. Usually it’s the reverse. To me that was an indicator that Mexico was very far from winning its war against the cartels.
Then there are oddities. When the Roomba robot vacuum was introduced in 2002, all the engineers I know were very excited, and I don’t recall them owning vacuums. I said, this is damn strange. This is not about cleaning floors, this is about scratching some kind of itch. It’s about something happening with robots.
Finally, there are coincidences. At the 2007 Darpa Urban Challenge, a bunch of robots successfully drove in a simulated suburb. The same day, there was a 118-car pileup on a California highway. We had robots that understand the California vehicle code better than humans, and a bunch of humans crashing into each other. That said to me, really, people shouldn’t drive.