Robot And Frank

Finally, a robot film worth seeing for the reality rather than the fantasy. Robot and Frank, one of the hits of the Sundance Film Festival, will be the headline act at the Robot Film Festival 2012, July 14-15 in New York City. But even if you can’t be in New York, you’ll be able to see Robot and Frank around the country in August. The film is about the relationship between an ageing burglar dealing with dementia, his family, and the health care robot his family forces him to accept. Frank puts up quite a fight. This crowd-pleasing film by first-time director Jake Schreier has picked up a distribution deal with Sony Pictures Worldwide Acquisitions and Samuel Goldwyn Films.

The partnership bought U.S. and North American distribution rights for the film, which stars Frank Langella and Susan Sarandon, with James Marsden, Liv Tyler and voice work by Peter Sarsgaard. Sony also acquired distribution rights for Latin America, Australia, New Zealand, South Africa, Scandinavia and Eastern Europe, according to a company statement released Wednesday afternoon. According to The Hollywood Reporter, the deal is valued at just over $2 million. [1]

I can’t wait for Robot and Frank to be seen more widely. I saw the film recently at the San Francisco International Film Festival and I think it’s the first film to showcase plausible and pragmatic human-robot interaction. While the robot itself is unrealistic, the emotional interactions of the people around it are definitely real. Our future holds many devices dedicated to our wellbeing and what we choose to do with them will probably differ from the ‘instruction manual’.

A mesmerizing performance by the versatile theater veteran Frank Langella (as Frank) is ably supported by costars including Susan Sarandon (as a librarian Frank has a crush on) and Liv Tyler and James Marsden (as Frank’s meddling children). The robot is voiced by Peter Sarsgaard, although the voice was recorded completely separately from the rest of the acting in order to achieve a ‘mechanical’ tone.

Initially, director Jake Schreier wanted someone to read the robot dialogue during filming to give Frank Langella something to play off, as the actress inside the robot suit couldn’t deliver lines on top of making the suit move properly. But Langella preferred to do the dialogue one-sided. Schreier extols Langella’s incredible virtuosity as an actor: his ability to remember and build on every gesture in each take.

You might be forgiven for thinking that the movie was written for Frank Langella, but the writer, Christopher D. Ford, was a film school buddy of Schreier’s. The film originated as Ford’s graduation short almost 10 years ago, inspired by initiatives in Japan to build elder care robots.

“Jake’s film shows us a future that is right around the corner, and I for one, can’t wait for my own robot,” said Sony official Joe Matukewicz, referring to first-time director Jake Schreier. Added Meyer Gottlieb, president of Samuel Goldwyn Films: “Our team fell in love with this clever, irreverent story anchored by Frank Langella’s indelible performance.”

Langella’s performance is so terrific, in fact, that it’s easy to assume the role of Frank, a crusty but charming former burglar who calls himself a “second-story man,” was written for Langella. Frank’s son presents him with a robot to serve as a health care aide. At first Frank is disgusted with himself for talking to “an appliance,” but soon begins to teach the robot how to pick locks.

The film began as a film-school short, inspired by an NPR radio report about a Japanese initiative to create robots that could care for the elderly, said screenwriter Christopher Ford. It was filmed in 20 days in upstate New York last summer. [2]

Perhaps the wonder is that no one has made a similar film before. At the SFIFF screening, Jake Schreier talked about the 10 years it took to go from student film project to feature debut, and his fears that someone else would beat him to it.

Robot and Frank starts a public discussion about human-robot interaction that is incredibly constructive and realistic. We are entering a future where, as Slate said, we may find it easier to love machines programmed to help us than family members who seem programmed to irritate us. Not that Robot and Frank is a love story either! [3]

  1. http://www.sltrib.com/sltrib/blogssundanceblog/53377292-50/frank-film-robot-langella.html.csp
  2. http://www.sltrib.com/sltrib/blogssundanceblog/53377292-50/frank-film-robot-langella.html.csp
  3. http://www.slate.com/blogs/browbeat/2012/01/24/robot_and_frank_a_great_sci_fi_buddy_heist_movie_about_old_age.html

The Future for Robotics (1)

There is a wave of excitement about the very real, and fast-approaching, future of robotics. I’m posting some of the zeitgeist here.

From Wired Magazine:

Paul Saffo

A longtime technology forecaster, Saffo is a managing director at the Silicon Valley investment research firm Discern. Formerly the director of the Institute for the Future, he is also a consulting professor in Stanford University’s engineering department.


There are four indicators I look for: contradictions, inversions, oddities, and coincidences. In 2007 stock prices and gold prices were both soaring. Usually you don’t see those prices high at the same time. When you see a contradiction like that, it means more fundamental change is ahead.

The second indicator is an inversion, where you see something that’s out of place. When the Mexican police captured the head of a drug cartel, in the photos the perpetrators were looking proudly at the camera while the cops were wearing ski masks. Usually it’s the reverse. To me that was an indicator that Mexico was very far from winning its war against the cartels.

Then there are oddities. When the Roomba robot vacuum was introduced in 2002, all the engineers I know were very excited, and I don’t recall them owning vacuums. I said, this is damn strange. This is not about cleaning floors, this is about scratching some kind of itch. It’s about something happening with robots.

Finally, there are coincidences. At the fourth Darpa Grand Challenge in 2007, a bunch of robots successfully drove in a simulated suburb. The same day, there was a 118-car pileup on a California highway. We had robots that understand the California vehicle code better than humans, and a bunch of humans crashing into each other. That said to me, really, people shouldn’t drive.

Finding a robot we need?

Today I listened to another person asking the world what sort of robot they should build. They had just told us the features it would have and wanted to know what people might use it for, so that they could find the money to build it. This is a completely back-to-front approach to robot building.

This is what Caroline Pantofaru, Leila Takayama, Tully Foote and Bianca Soto refer to as ‘technology push’ in their recent paper. An excerpt is quoted below, but I recommend reading the whole thing.

Pantofaru, C., Takayama, L., Foote, T., and Soto, B. “Exploring the Role of Robots in Home Organization.” Proc. of Human-Robot Interaction (HRI), Boston, MA, pp. 327-334, 2012.

2. NEED FINDING
We present need finding as a methodological tool for quickly learning about a user space, inspiring robotics research within that space, and grounding the resulting research. Much (although certainly not all) of robotics research today is inspired by technology push – a technologist deciding to apply a technology to a problem. User-based research often does not start until after a prototype or system specification exists. This is a valuable method as researchers have spent years building intuition for their field.

For robotics research, need finding can provide a complementary, user-driven source of inspiration and guidance, as well as refining technology push ideas to better fit an application space.

Need finding is a method that comes from the product design community [2]. The goal is to identify a set of fundamental user needs of the community a product aims to satisfy. The need finding process is summarized in Figure 2.


Figure 2: An overview of the need finding process

Need finding begins with generating empathy for the user group through interviews, and sharing that empathy with other designers and researchers through conversations and media like videos. This is a concrete and analytic process.

The results from the interviews are then abstracted into frameworks, often presented in graphical form such as the 2×2 charts in Figures 2 and 3. The lessons from the frameworks are then converted to design implications, which are meant to be generative, allowing many interesting solutions to evolve. This process can be iterated and interleaved as necessary. The process is expanded upon below, with a description of our own implementation for this paper.

2.1 Interviews and Observations
Need finding begins with identifying a community of potential product users. A very small team goes out to visit a sample of community members in the places where they do the activities that the product supports. Immersion in the interviewee’s environment is the key to success and a distinguishing feature of need finding. This immersion inspires the participant to discuss details they might have otherwise forgotten, and allows the interviewer to quickly reconcile the interviewee’s words with reality.

It is important to note that relying on self-reported data alone is dangerous due to people’s poor memories, as well as the social desirability bias (an inclination for respondents to say things they think will be received favorably). There is even a standardized scale for measuring a person’s inclination toward the social desirability bias [5]. These problems can be so serious that some usability experts suggest completely ignoring what users say, and instead watching what they do [16].

BotSpot – A Venture into Robotic Artists

There have been some great robot projects on Kickstarter recently, and BotSpot pushes the envelope further. I can see a heap of potential here, as both a new business and a new art form. Although that’s a bit like saying spray cans enabled a new art form; many would disagree. Like any new technology, some people will do wonderful things with it, others will advertise, and the rest of us will make a mess.

BotSpot grew out of TechShop in Menlo Park, built by artists/roboticists Carter and Wayne. They have created a programmable mobile platform with a retractable pen sleeve, and are turning to Kickstarter for funds to finish the image mashup/generation software and add the aerosol handler hardware.

The role that Kickstarter is playing in growing new robot businesses is fascinating in its own right. Some other recent success stories include Ninja Blocks (a web interface for Arduino that makes the internet of things easy), Romo (the android mobile robot from Romotive) and the Printrbot… not truly robotic, but part of the brave new world of making things and things making.

BotSpot – A Venture into Robotic Artists by G. Carter Stokum — Kickstarter.

Creativity And Robotics

This CNN article featuring Heather Knight and Data gives a good perspective on the different meanings of creativity, perception and processing at the intersection of human and machine. I would like to see more about how Data (or other robots) perceives and reacts, but I guess that’s too much hard thought for popular write-ups… unless someone like Heather has already converted robot expression into a ‘quirky’ human form.

Data, Heather Knight’s Nao robot, is a stand-up comic, in case you weren’t already familiar with their work from films, TED talks, live performances and conferences.

Roboticist sees improvisation through machine’s eyes – CNN.com.

The ethnography of robots

Finally, someone who gets the distinction between robots and robotics, and how most human-robot interaction is really human-human interaction by proxy. Stuart Geiger, a PhD student at the UC Berkeley School of Information, was interviewed by Heather Ford for Ethnography Matters. This relates very well to my supervisor Chris Chesher’s recent work on asymmetric metacommunication between humans and robots. [in press]

via The ethnography of robots.

HRI2012 Reflections

Highlights of the recent Human-Robot Interaction conference in Boston for me included meeting many great researchers and having some of my basic approaches and assumptions validated. In no particular order, I’ve decided to capture some of my observations.

Hiroshi Ishiguro was on the telepresence panel (with Leila Takayama from Willow Garage, Peter Vicars from VGo and Stephen Von Rump from Giraff). Ishiguro is very interested in exploring the least amount of personality and embodiment needed for telepresence, which is almost the exact opposite of the approach behind his famous android clones. The Elfoid is also perhaps being used as an android phone (pun intended?).

Solace Shen from the University of Washington presented the best conference paper, “Do People Hold a Humanoid Robot Morally Accountable For The Harm It Causes?” by Peter H. Kahn Jr., Takayuki Kanda, Hiroshi Ishiguro, Brian T. Gill, Jolina H. Ruckert, Solace Shen, Heather E. Gary, Aimee L. Reichert, Nathan G. Freier and Rachel L. Severson. (It also won the informal award for most co-authors.) The University of Washington is one of the few locations that puts a focus on the cultural and ethical issues of robot interaction.

Other papers I really liked included “Consistency in Physical and On-screen Action Improves Perceptions of Telepresence Robots” by David Sirkin & Wendy Ju from Stanford, alongside work on proxemics by others, including Michael Gielniak & Andrea L. Thomaz.

In fact, Georgia Institute of Technology had many strong papers and presenters. “Trajectories and Keyframes for Kinesthetic Teaching: A Human Robot Interaction Perspective” by Baris Akgun, Maya Cakmak, Jae Wook Yoo and Andrea Lockerd Thomaz was solid, pragmatic work on robot skill learning. “The Domesticated Robot: Design Guidelines for Assisting Older Adults to Age in Place” by Jenay M. Beer, Cory-Ann Smarr, Tiffany L. Chen, Akanksha Prakash, Tracy L. Mitzner, Charles C. Kemp and Wendy A. Rogers provided rich data for designing robots that are useful and acceptable in the home.

In fact, all the papers in that particular session chaired by Astrid Weiss, ‘Living and Working with Service Robots’, involved deep, rich or longitudinal studies. (Someone remarked to me how much they appreciated robot-interaction studies that lasted longer than 10 minutes.) I’m looking forward to more from Caroline Pantofaru’s people-centred design approach to robot house organizing and Selma Sabanovic’s creative work on domestic robot embodiment.

There were also good videos and discussions about the role of robots in STEM from Ross Mead (Botball), Andrew Stout (Aldebaran Nao) and David Robert & Cynthia Breazeal’s paper “Blended Reality Characters”, which demonstrated the Grey Walter-inspired alphabet block robot that transitions between the floor area and the large screen via a turtlebot hutch.

The first-day workshop on gaze, led by Bilge Mutlu, would have been very interesting given my film-making and cultural theory background. However, I did a ROS/rosbridge workshop and drove a turtlebot around instead. This promises to be a handy skill if the robot dance project I’m collaborating on moves in a more robotic direction (pun intended).
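Part of what makes rosbridge so approachable is that it exposes ROS topics over a socket as plain JSON, so even a few lines of Python can build the kind of message that drives a turtlebot. The sketch below only constructs the JSON frames rather than sending them; the topic name /cmd_vel and the ‘advertise’/‘publish’ operations are assumptions based on the rosbridge 2.0 protocol, not the workshop’s actual code, and actually moving a robot would of course require an open connection to a running rosbridge server.

```python
import json

def make_twist(linear_x=0.0, angular_z=0.0):
    # geometry_msgs/Twist as plain dicts: forward speed (m/s) and turn rate (rad/s)
    return {
        "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }

def advertise(topic, msg_type):
    # rosbridge v2 'advertise' op: declare a topic before publishing to it
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})

def publish(topic, msg):
    # rosbridge v2 'publish' op: wrap the message payload for the wire
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Drive gently forward while turning: the frames a client would send
# over the rosbridge socket (hypothetical values for a turtlebot base).
frames = [
    advertise("/cmd_vel", "geometry_msgs/Twist"),
    publish("/cmd_vel", make_twist(linear_x=0.2, angular_z=0.5)),
]
for frame in frames:
    print(frame)
```

Everything robot-specific lives in the message dictionaries, which is exactly why the workshop format works: swap the topic and message type and the same few lines talk to a different robot.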