Finally, a robot film worth seeing for the reality and not the fantasy. Robot and Frank, one of the hits at the Sundance Film Festival, will be the headline act at the Robot Film Festival 2012, July 14-15 in New York City. But even if you can’t be in New York, you’ll be able to see Robot and Frank around the country in August. The film is about the relationship between an ageing burglar dealing with dementia, his family, and the health care robot that his family forces on him. Frank puts up quite a fight. This crowd-pleasing film by new director Jake Schreier has picked up a distribution deal with Sony Pictures Worldwide Acquisitions and Samuel Goldwyn Films.
The partnership bought U.S. and North American distribution rights for the film that stars Frank Langella and Susan Sarandon, with James Marsden, Liv Tyler and voice work by Peter Sarsgaard. Sony also acquired distribution rights for Latin America, Australia, New Zealand, South Africa, Scandinavia and Eastern Europe, according to a company statement released Wednesday afternoon. According to The Hollywood Reporter, the deal is valued at just over $2 million.
I can’t wait for Robot and Frank to be seen more widely. I saw the film recently at the San Francisco International Film Festival and I think it’s the first film to showcase plausible and pragmatic human-robot interaction. While the robot itself is unrealistic, the emotional interactions of the people around it are definitely real. Our future holds many devices dedicated to our wellbeing and what we choose to do with them will probably differ from the ‘instruction manual’.
A mesmerizing performance by the versatile theater veteran Frank Langella (as Frank) is ably supported by costars like Susan Sarandon (as a librarian Frank has a crush on), Liv Tyler and James Marsden (as Frank’s meddling children). The robot is voiced by Peter Sarsgaard, although the voice was done completely separately to the rest of the acting, in order to achieve a ‘mechanical’ tone.
Initially director Jake Schreier wanted someone to read the robot dialogue during filming, to give Frank Langella something to work off, as the actress inside the robot suit couldn’t read lines on top of making the robot suit move properly. But Langella preferred to do the dialogue one-sided. Schreier extols Langella’s incredible virtuosity as an actor, his ability to remember and build on every gesture in each take.
You might be forgiven for thinking that the movie was written for Frank Langella, but the writer, Christopher D. Ford, was a film school buddy of Schreier’s. Originally, the film was Ford’s graduation short almost 10 years ago, inspired by the initiatives in Japan to build elder care robots.
“Jake’s film shows us a future that is right around the corner, and I for one, can’t wait for my own robot,” said Sony official Joe Matukewicz, referring to first-time director Jake Schreier. Added Meyer Gottlieb, President of Samuel Goldwyn Films: “Our team fell in love with this clever, irreverent story anchored by Frank Langella’s indelible performance.”
Langella’s performance is so terrific, in fact, that it’s easy to assume that the role of Frank, a crusty but charming former burglar who calls himself a “second-story man,” was written for Langella. Frank’s son presents him with a robot to serve as a health care aide. At first Frank is disgusted with himself for talking to “an appliance,” but soon begins to teach the robot how to pick locks.
The film began as a film-school short, inspired by an NPR radio report about a Japanese initiative to create robots that could care for the elderly, said screenwriter Christopher Ford. It was filmed in 20 days in upstate New York last summer. 
Perhaps the wonder is that no one has made a similar film yet. At the SFIFF screening, Jake Schreier talked about the 10 years that it took to go from student film project to making a feature debut, and his fears that someone else would beat him to it.
Robot and Frank starts a public discussion about human-robot interaction that is incredibly constructive and realistic. We are entering a future where, as Slate said, we may find it easier to love machines programmed to help us than the families who seem programmed to irritate us. Not that Robot and Frank is a love story either!
frog’s Creative Director, Scott Jenson, first UI designer at Apple and recently head of UX at Google mobile, blogged about smart devices and how they change the design process. This is relevant to the very real near future of robotics. I’m continuing the zeitgeist sampling here.
Triumph of the Mundane
By Scott Jenson - April 18, 2012
Smart devices require a significant shift in thinking
This blog explores how to design smart devices. But these new devices are just so new and require such new insights, that our quaint, old school notions of UX design are completely blinding us. We are stuck between the classic paradigm of desktop computers, and the futuristic fantasy of smart dust. The world is either fastidious or fantastic. The path ahead is hard to see. Alan Kay said the best way to predict the future is to invent it… but what if we don’t know what we want?
Coffee Maker Syndrome
I’ve long proposed just-in-time interaction as a core approach to smart devices but while presenting on this topic over the past year, it has astounded me that people have such a hard time just thinking about the overall landscape of smart devices. Take for example this tweet:
Overheard at #CES: “I’m pretty sure my coffee maker doesn’t NEED apps.”
On the face of it, this makes perfect sense. It seems unlikely you’ll be reading your email on your coffee maker. But this dismissive approach is an example of what Jake Dunagan has called “the crackpot realism of the present”. We are so entrenched in our current reality that we dismiss any exploration of new ideas. By stating that apps on a coffee maker would be silly (which is true), we easily dismiss any discussion of other potential visions of functionality.
When television was first introduced, the earliest programs were literally radio scripts read aloud in front of the camera. Radio had been dominant for decades so broadcasters just coasted into TV without thinking creatively about how to approach the medium differently. As Marshall McLuhan said, “We look at the present through a rearview mirror; we walk backwards into the future.”
Smart devices require three big shifts
Assuming that smart devices require apps is like walking backwards into the future. We don’t need our smart devices to run Microsoft Office; we just need them to, say, log their electrical usage (internal, invisible functionality) or give us quick how-to videos (simple user-facing functionality).
If we want to properly discuss how to design smart devices, we must appreciate how they shift away from standard computers in three significant ways: micro functionality, liberated interaction, and a clustered ecosystem.
Shift 1: Micro Functionality
In my last post I discussed a fundamental UX axiom that Value must be greater than Pain. This handy little axiom implies many useful theorems. The most radical is that if pain gets very low, the actual value can also be low. While expensive tablets demand significant functional apps, cheap processing allows for more humble micro functionality. It’s one of the biggest hurdles that people have in discussing smart devices. They are so entrenched in the PC paradigm that they assume every device with a CPU must be bristling with functionality.
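Jenson’s axiom and its most radical theorem can be restated as a toy inequality. The function and numbers below are my own illustration, not anything from the post: as interaction pain drops toward zero, even trivial value clears the bar.

```python
# Toy formalization (an illustrative assumption, not from Jenson's post) of
# the axiom "Value must be greater than Pain": a feature is viable only when
# its perceived value exceeds the interaction pain of using it.

def is_viable(value, pain):
    """Return True when a feature's value outweighs its interaction pain."""
    return value > pain

# A substantial app on a $600 tablet: high pain (cost, attention) demands
# high value to be worth it.
big_app = is_viable(value=10, pain=8)

# Micro functionality: near-zero pain makes even trivial value worthwhile,
# e.g. a glanceable diagnostic readout on a cheap appliance.
micro_feature = is_viable(value=0.5, pain=0.1)

print(big_app, micro_feature)
```

The numbers are arbitrary; the point is structural: lowering pain expands the set of viable features far more than raising value does.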
However, simple doesn’t equate to useless. For example, whenever I offer up the possibility of a ‘smart toaster’ people often chuckle; it’s the coffee maker syndrome all over. But there are lots of small and even fun things a toaster could do: log its electrical usage, offer up an instructional video on how to clean the crumb tray, report any diagnostic errors, call the support line for you, or even tweak its ‘toast is done’ sound. All of these are fairly trivial but are still useful if a) the value is genuine and b) the cost of adding the functionality is small. $600 tablets must do quite a bit, but this isn’t true for a $40 toaster.
The biggest impact of micro functionality is in how little real interactive power is required. So often when I talk of using HTML5 as the lingua franca of smart devices, people trot out the ‘it can’t do X’ argument, extolling the superiority of native apps. But micro functionality is so basic and simple that HTML5 is more than adequate: you’ll normally only need to view or change a simple value. Micro functionality only requires micro expressivity.
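As a purely illustrative sketch (the device, names and values are my assumptions, not anything from Jenson’s post), the entire ‘micro functionality’ of a smart toaster might reduce to reading one status snapshot and writing a couple of simple settings:

```python
# Hypothetical sketch: the complete "micro functionality" of a smart toaster,
# modeled as a few readable/writable values plus a usage log. All names here
# are illustrative assumptions, not a real device API.

class SmartToaster:
    def __init__(self):
        self.settings = {"done_sound": "ding", "darkness": 3}  # writable values
        self.kwh_log = []           # internal, invisible functionality
        self.last_error = None      # diagnostics a phone could read

    def log_usage(self, kwh):
        """Record one toasting cycle's electrical usage."""
        self.kwh_log.append(kwh)

    def status(self):
        """Everything a remote smart display would ever need to show."""
        return {
            "total_kwh": round(sum(self.kwh_log), 3),
            "settings": dict(self.settings),
            "error": self.last_error,
        }

    def set_value(self, key, value):
        """The whole 'write' API: change one simple setting."""
        if key not in self.settings:
            raise KeyError(f"unknown setting: {key}")
        self.settings[key] = value

toaster = SmartToaster()
toaster.log_usage(0.05)
toaster.log_usage(0.06)
toaster.set_value("done_sound", "trumpet")
print(toaster.status())
```

Viewing or changing values this simple needs no native app; any page a phone can render, HTML5 included, is more than expressive enough.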
Shift 2: Liberated Interaction
Remember that Value must be > Pain. Micro functionality requires micro pain to be viable. No one is going to argue with their toaster; this type of functionality has to be quick, fast, and easy. Unfortunately, the trend today is that any device with functionality will usually have a tiny display, tinier buttons, a complex user manual, and a tech support line.
Smart devices need to be liberated from being solely responsible for all interaction. I’ve written previously about just-in-time interaction which allows any smart display (a phone, tablet, TV, interactive goggles, and yes, a laptop) to interact with a smart device. Using a significantly more capable device is so much better than cobbling together a cheap LCD display with fiddly little buttons on the device itself. A generation raised on rich phone interaction will expect, even demand better.
Moving interaction to smart displays also has a huge benefit for manufacturers. The cost of computation will likely be the least of a manufacturer’s concerns. Small displays, buttons, complex instruction manuals, and tech support lines are all very expensive. What if manufacturers could assume that any smart device they built would have free access to a big interactive color screen? Not only that but it would have excellent computing power, smooth animated graphics, a robust programming environment and to top it off a universally accepted consumer interaction model that didn’t require any training? Using these displays would allow enormous cost reductions, not only in parts costs, but in simpler development costs as well.
Shift 3: A Clustered Ecosystem
Once we liberate the interaction from the device, we’ve unlocked its functionality across many locations. Not only can I use my phone with the thermostat in front of me, but also with my TV across the room and with my laptop at work across the city. By liberating functionality from devices, we liberate our ability to use these devices from anywhere. My previous post was called “Mobile Apps must die” not because apps will actually die (they will always have a role) but because the shortsighted desire to funnel functionality exclusively through them must stop. If these very simple apps are written in HTML5, they can be used across any device, which is very powerful indeed.
It is inevitable that devices will ship with interactivity built in. But as more devices become functional, it’s going to become overwhelming to have each device be its own island. The three shifts discussed here (micro functionality, liberated interaction, and a clustered ecosystem) all point to a new pattern of thinking: small devices with small functionality that all work together in profound ways. This is a triumph of the mundane, a challenge to our PC-soaked way of thinking.
But this new approach requires an open standard that all devices would use to announce their presence and offer their functionality in a universal language like HTML. In many ways we are at the cusp of a new hardware era of discoverability much like the web first had in the early ’90s.
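No such standard exists yet, but as a hypothetical sketch, a device’s announcement could be as small as a self-describing JSON record pointing at an HTML5 UI that any nearby display can load. Every field name and the protocol tag below are invented for illustration:

```python
# Illustrative sketch of the kind of open "announcement" a smart device might
# broadcast so any nearby smart display can discover it. The descriptor
# format is an assumption for illustration, not an existing standard.
import json

def make_announcement(name, device_type, ui_url):
    """Build a minimal self-describing discovery record."""
    return json.dumps({
        "name": name,
        "type": device_type,
        "ui": ui_url,                  # HTML5 page a phone/TV/laptop could load
        "proto": "toy-discover/0.1",   # hypothetical protocol tag
    })

def parse_announcement(payload):
    """What a smart display would do on receipt: decode and pick out the UI."""
    record = json.loads(payload)
    return record["name"], record["ui"]

msg = make_announcement("kitchen-toaster", "toaster", "http://10.0.0.7/ui")
name, ui = parse_announcement(msg)
print(name, ui)
```

The design choice that matters is that the device ships no screen and no app; it only has to say what it is and where its (web-renderable) interface lives.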
What’s holding smart devices back is our oh-so-human ability to misunderstand their potential. These three shifts are a big step in understanding what we need to do. Let’s be clear, this is not technically challenging! We just need to understand what we want. Alan Kay is right: we have to invent our future. frog, where I work, is just starting to build simple prototypes to validate these concepts. As they mature, I’ll be sharing more information about them. It’s clear that technology is not the limiting factor, it’s just our desire to imagine a different future.
blog post in the wild at frog design’s designmind
There is a wave of excitement about the very real future of robotics, which is coming very soon. I’m posting some of the zeitgeist here.
from Wired Magazine.
A longtime technology forecaster, Saffo is a managing director at the Silicon Valley investment research firm Discern. Formerly the director of the Institute for the Future, he is also a consulting professor in Stanford University’s engineering department.
There are four indicators I look for: contradictions, inversions, oddities, and coincidences. In 2007 stock prices and gold prices were both soaring. Usually you don’t see those prices high at the same time. When you see a contradiction like that, it means more fundamental change is ahead. The second indicator is an inversion, where you see something that’s out of place. When the Mexican police captured the head of a drug cartel, in the photos the perpetrators were looking proudly at the camera while the cops were wearing ski masks. Usually it’s the reverse. To me that was an indicator that Mexico was very far from winning its war against the cartels.
Then there are oddities. When the Roomba robot vacuum was introduced in 2002, all the engineers I know were very excited, and I don’t recall them owning vacuums. I said, this is damn strange. This is not about cleaning floors, this is about scratching some kind of itch. It’s about something happening with robots.
Finally, there are coincidences. At the fourth DARPA Grand Challenge in 2007, a bunch of robots successfully drove in a simulated suburb. The same day, there was a 118-car pileup on a California highway. We had robots that understand the California vehicle code better than humans, and a bunch of humans crashing into each other. That said to me: really, people shouldn’t drive.
Where is feminism when you need it? That question is currently being asked, and answered, in Silicon Valley. Whether it’s discussing the resurgence of sexism, finding the new flavors of feminism, cheerleading for all the fab women in tech and pushing for more women to join the geektrain; or whether it’s asking hard questions about how the heck we manage to be female in high-power areas, the discussions are plentiful and the responses thoughtful. I’m collecting some here:
A great starting point in Silicon Valley is Women 2.0. Founded in 2006 by Shaherose Charania and Angie Chang, Women 2.0 is a global network and social platform for influencers that drive trends and decisions — as startup founders and as consumers. Their mission is to inform, inspire and educate a new generation of females that are entrepreneurial and successful.
Unfortunately, the environment is still quite toxic to women in vast swathes of the tech world.
‘Gang Bang Interviews’ and ‘Bikini Shots’: Silicon Valley’s Brogrammer Problem by Tasneem Raja at Mother Jones is a curation of stories, articles and blog posts about the bros/hos culture. She includes the Geeklist incident (and others) but also labor statistics on women in tech and comments from the smart people who realize that alienating 50% of the workforce is short sighted.
But recruiter beware, warn some veteran observers: a bros-only atmosphere will hurt no one more than the startups that foster it. “We simply cannot afford to alienate large chunks of the workforce,” notes Dan Shapiro, a tech entrepreneur who sold his comparison-shopping company to Google and now works there as a product manager. Shapiro, who has blogged in the past about sexism in the tech industry, notes that “it is a widely understood truth that the single biggest challenge to a successful startup is attracting the right people. To literally handicap yourself by 50 percent is insanity.”
‘Why your next board member should be a woman’ is a recent post on TechCrunch by Aileen Lee. See also my own piece on ‘The opportunities for Robots, Startups and Women’ at Women2.org, MEGA Startup Weekend and the Robot Launch Pad.
But while the tech culture is still male dominated, I believe that the onus is on men to speak up. When women tweet, blog or speak about sexism, the resulting flamewar is usually far greater than any positive gains. Examples: Shanley Kane (Geeklist/Twitter), Kathy Sierra (who shut down her blog in 2007), Rebecca Watson (Skepchick). This is one reason you don’t hear a lot of women complaining. Please don’t interpret silence as consent.
You might not hear the sighs or complaints, but you do hear a lot of appreciation from women if you speak in support of decent human behavior (which makes sound financial sense too). Chris Yeh spoke out recently at MEGA Startup Weekend and took the heat. It matters, as Adria Richards, technology evangelist, replied in ‘Everyone Has a Voice When It Comes To Tech and Sexism’.
Once again, there is an international push for more women in technology. Tech Needs Girls describes an international ‘road map for tech education and career changes’.
New York, 26 April 2012 – Global leaders from the US, Europe, Africa and Asia joined together today to debate and define a roadmap that will help break down barriers and overturn outmoded attitudes in a bid get more girls into technology-related studies and careers.
A high-level dialogue … identified misguided school-age career counselling, the popular media’s ‘geek’ image of the technology field, a dearth of inspirational female role models, and a lack of supportive frameworks in the home and workplace as factors that, together, tend to dissuade talented girls from pursuing a tech career.
Once again, the focus is on the pipeline: getting girls into technology. But getting girls into the pipeline is no good if the pond at the end of the pipeline is still poisoned.
As GeekDad’s guest writer Michael Eisen describes his 6-year-old daughter’s disappointment at being excluded from a massively geeky ‘father-son sweepstake’, he touches on the part of us that dies inside every time the world says ‘not for girls’ without even intending it to be a slap down. Whether it’s sexism, racism or some other ism creating division, we all need to speak up.
Today I listened to another person asking the world what sort of robot they should build. They had just told us the features it would have and wanted to know what people might use it for, so that they could find the money to build it. This is a completely back-to-front approach to robot building.
This is what Caroline Pantofaru, Leila Takayama, Tully Foote and Bianca Soto refer to as ‘technology push’ in their recent paper. An excerpt is quoted below, but I recommend reading the whole thing.
2. NEED FINDING
We present need finding as a methodological tool for quickly learning about a user space, inspiring robotics research within that space, and grounding the resulting research. Much (although certainly not all) of robotics research today is inspired by technology push – a technologist deciding to apply a technology to a problem. User-based research often does not start until after a prototype or system specification exists. This is a valuable method as researchers have spent years building intuition for their field.
For robotics research, need finding can provide a complementary, user-driven source of inspiration and guidance, as well as refining technology push ideas to better fit an application space.
Need finding is a method that comes from the product design community. The goal is to identify the set of fundamental needs of the user community that a product aims to satisfy. The need finding process is summarized in Figure 2.
Figure 2: An overview of the need finding process
Need finding begins with generating empathy for the user group through interviews, and sharing that empathy with other designers and researchers through conversations and media like videos. This is a concrete and analytic process.
The results from the interviews are then abstracted into frameworks, often presented in graphical form such as the 2×2 charts in Figures 2 and 3. The lessons from the frameworks are then converted to design implications, which are meant to be generative, allowing many interesting solutions to evolve. This process can be iterated and interleaved as necessary. The process is expanded upon below, with a description of our own implementation for this paper.
2.1 Interviews and Observations
Need finding begins with identifying a community of potential product users. A very small team goes out to visit a sample of community members in the places where they do the activities that the product supports. Immersion in the interviewee’s environment is the key to success and a distinguishing feature of need finding. This immersion inspires the participant to discuss details they might have otherwise forgotten, and allows the interviewer to quickly reconcile the interviewee’s words with reality.
It is important to note that relying on self-reported data alone is dangerous due to people’s poor memories, as well as the social desirability bias (an inclination for respondents to say things they think will be received favorably). There is even a standardized scale for measuring a person’s inclination toward the social desirability bias. These problems can be so serious that some usability experts suggest completely ignoring what users say, and instead watching what they do.