Dancing with robots

Robots parodying the latest video hits are cute, but some choreographers, artists and human-robot interaction specialists have pushed the boundaries of how humans and robots move together in fascinating ways. Thomas Freundlich has just uploaded a video of his work “Human Interface” with ABB industrial robots, which spurred me to post a snapshot or two from the history of robot choreography.

“Human Interface” is an evening-length piece for four dancers, two human and two robot, and an extension of Freundlich’s 2008 work “Actuator”. Freundlich programs the industrial arms himself, using ABB’s RobotStudio software and its SafeMove capabilities, which allow humans to work alongside the robots. Freundlich is himself one of the dancers and finds that robots can be very nuanced dancers, able to repeat finely tuned movements with complete consistency. “Human Interface” premiered at the Zodiak Center in Helsinki in 2012 to rave reviews.

“If someone still thinks contemporary dance is a joke, they would do well to make their way to the Pannuhalli stage of the Cable Factory and reconsider their opinion. There, a spectacle awaits: Two real industrial robots and two dancers, along with a world-class stage designer and musician offer an experience reminiscent of James Cameron’s film Avatar (2009). For me, this dance work was more three-dimensional and scarier than the film.”
- Marja Hannula, Helsingin Sanomat, Finland’s leading daily newspaper, May 24, 2012
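
For a sense of what programming a dance phrase involves, here is a minimal sketch in Python. To be clear, this is not Freundlich’s actual RAPID/RobotStudio code: the joint values, timings and the send_to_robot stub are all invented for illustration. The core idea is simply a timed sequence of joint targets, replayed identically at every performance.

    # Illustrative sketch only: a dance phrase as a timed sequence of
    # joint-angle targets, in the spirit of programming an industrial arm.
    # Freundlich works in ABB's RobotStudio/RAPID; everything below is
    # invented for this example.
    import time

    # Each step: six joint angles (degrees) and how long to hold the pose.
    PHRASE = [
        ((0, -30, 45, 0, 60, 0), 1.5),        # reach forward
        ((40, -10, 30, 15, 45, 90), 0.8),     # sweep right
        ((-40, -10, 30, -15, 45, -90), 0.8),  # mirror to the left
        ((0, 0, 0, 0, 0, 0), 2.0),            # return home
    ]

    def send_to_robot(joints):
        """Stand-in for a real motion command (e.g. RAPID's MoveAbsJ)."""
        print("move to", joints)

    def perform(phrase, repeats=2):
        # The robot's strength as a dancer: exact repetition of a
        # finely tuned phrase, show after show.
        for _ in range(repeats):
            for joints, hold in phrase:
                send_to_robot(joints)
                time.sleep(hold)

    perform(PHRASE)

In a real ABB workflow the equivalent step would be a RAPID motion instruction, with the safety system supervising speed and workspace limits around the human dancers.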

Both Stäubli and KUKA have also produced dancing robots, with Stäubli’s RoboLounge homage to Daft Punk and KUKA’s synchronized robot arms, which are also used by the robot cinematography company Bot & Dolly. But Freundlich’s work is more closely aligned with pioneering human/machine choreography by the likes of Margie Medlin, Gideon Obarzanek and Margo Apostolos.

Gideon Obarzanek, of Chunky Move, is renowned for utilizing digital technologies, lasers, motion capture and projection. In a recent work, Connected, Obarzanek inverts his technological aesthetic in partnership with sculptor Reuben Margolin to create a piece that animates both the body and the machine through a physical connection between the dancers and Margolin’s purpose-built kinetic sculpture.

Reuben’s startlingly live sculptural works – constructed from wood, recycled plastic, paper and steel – transcend their concrete forms once set into motion, appearing as natural waveforms in a weightless kinetic flow. Suspended by hundreds of fine strings receiving information from multiple camshafts and wheels, his sculptures reveal in articulate detail the impulses of whatever they are coupled to. In Connected, it is people – athletic and agile dancers’ bodies twisting and hurtling through space, as well as people in recognisable situations.

Beginning with simple movements and hundreds of tiny pieces, the dancers build their performance while they construct the sculpture in real time. During the performance, these basic elements and simple physical connections quickly evolve into complex structures and relationships.

All gods are homemade, and it is we who pull their strings, and so give them the power to pull ours. (Aldous Huxley)

“Obarzanek seems to function in many ways as an irritant, disrupting our comfortable experiences of dance, confounding notions of illusion and representation, and disturbing the criteria by which dance might be judged good or bad.” [The Age]

However, Obarzanek rides on the shoulders of pioneering moving-image, moving-body choreographers like Margie Medlin. With a background in film and dance, Medlin has been crossing the boundaries of art and science for well over 25 years. Her recent installations devise software and hardware tools that offer a highly intelligent reflection on dance through the media of new technology.

Medlin’s Quartet Project, which ran from 2004 to 2007, was a dance, music, new-media and robotic performance that observed and articulated the communication and perception of the human body, exploring and creating real-time relationships between music, the gesture of playing music, dance, robotics and animation. Quartet was a collaboration between artists, technicians and scientists, with Stevie Wishart as musical director, Rebecca Hilton as choreographer, Holger Deuter (DNA 3D) on animation, interactivity, motion capture and the real-time set, Gerald Thompson on the motion-control camera robot, and Nick Rothwell as interface designer. The biomedical science of hearing implemented in Quartet was produced in association with The Physiology Lab, University of Cambridge.

The Quartet Project commissioned complex tools to create visual bridges between cyberspace, augmented reality and physical space. These systems present a versatile and creative process for experimenting with cause and effect in multiple media: an insight into what it means to transform one medium or gesture into a completely different one. Technically, these tools create a motion capture system that combines two skeletons, one built from the data of a dancer and one from the data of a musician. Together they explore the choreography of cinematic space and the poetics of looking and moving.

“Quartet was a project to develop a real-time interactive robot to perform live on stage with a dancer and a musician. Advanced motion control technology was used to capture the dancer’s movements. I chose motion sensors made by MicroStrain in the US. These were interfaced via a serial data protocol radio link devised by Glen Anderson and converted to motor control signals at the robot. Movement data could also be simultaneously recorded by a separate computer running MotionBuilder software, as well as control a 3D avatar which was projected onto a screen behind the performer.” [Gerald Thompson]
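
Thompson’s description maps onto a fairly simple data pipeline: wireless sensor packets in, motor commands out, with the same stream logged for the projected avatar. Here is a hedged Python sketch of that shape; the port name, packet format and command mapping are my assumptions for illustration, not the production system’s actual protocol.

    # Hypothetical Quartet-style pipeline: read wireless IMU packets from
    # a serial radio receiver, map orientation onto robot motor commands,
    # and log the stream for a separately rendered 3D avatar.
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"  # radio receiver (assumed)
    BAUD = 115200

    def parse_packet(line):
        """Assume 'roll,pitch,yaw' in degrees as comma-separated ASCII."""
        roll, pitch, yaw = (float(v) for v in line.decode().strip().split(","))
        return roll, pitch, yaw

    def to_motor_command(roll, pitch, yaw):
        """Clamp sensor angles into an assumed safe joint range."""
        clamp = lambda a: max(-90.0, min(90.0, a))
        return {"pan": clamp(yaw), "tilt": clamp(pitch), "roll": clamp(roll)}

    with serial.Serial(PORT, BAUD, timeout=1) as link, \
            open("mocap.log", "w") as log:
        while True:
            line = link.readline()
            if not line:
                continue  # radio dropout; keep listening
            angles = parse_packet(line)
            log.write("%f,%f,%f\n" % angles)   # recorded for the avatar
            print(to_motor_command(*angles))   # stand-in for the motor bus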

Robot choreography can be traced back through the work of Margo Apostolos, both live and in publication, from “A comparison of the artistic aspects of various industrial robots” [1988] and “Robot Choreography” [1990] to her more recent work with Mark Morris. Dr. Apostolos was instrumental in bringing the internationally renowned director/choreographer Mark Morris to USC for a workshop that integrated motion capture and robotics with modern dance. Robot choreography was developed as an artistic-scientific collaboration to explore an aesthetic dimension of robotic movement. Robots and control techniques based on biological principles can help transfer methods developed for human choreography to the programming of aesthetic robot motion. The resulting form of choreographed robot movement integrated art and technology as a possible new art form with relevant research implications.

Dr. Apostolos is Director of Dance and Associate Professor in the USC School of Dramatic Arts. She has authored and presented numerous articles on her research and design in robot choreography. In addition to her doctoral and post-doctoral studies at Stanford University, she earned an M.A. in Dance from Northwestern University. She has served as visiting professor in the Department of Psychology at Princeton University and has taught in Chicago and San Francisco, and at Stanford University, Southern Illinois University and California Polytechnic State University, San Luis Obispo. A recipient of the prestigious NASA/ASEE Summer Faculty Fellowship, Dr. Apostolos worked for NASA at the Jet Propulsion Laboratory/Caltech as a research scientist in the area of space telerobotics.

“The Robot Etudes” was published in 2010 by students in the Immersive Kinematics group at the University of Pennsylvania, outlining Apostolos’s contribution to robot choreography, the history of robotics and theater, and some of the research and pragmatic implications for ongoing work in human-machine interaction.

In the spring of 2010, architecture and engineering students at the University of Pennsylvania were teamed up to create artistic mechatronic devices. The context for their creations was Shakespeare’s A Midsummer Night’s Dream. The project became a joint effort between professors from Mechanical Engineering and Architecture and a director from a professional theater company, instructing a group of students to develop a performance, The Robot Etudes, staged by the Pig Iron Theatre Company at the Annenberg Center. Robots have been used in theater before, and artistic directors have instructed technicians to develop special-effects robots, but developing robotic elements specifically for theater with a diverse set of creative innovators is new. This paper focuses on the process by which the play was formed and the successes and struggles in forming a cooperative experiment between three very different disciplines.

Immersive Kinematics is a collaboration between Penn Engineering and Penn Design that expands the roles of architecture and engineering, focusing on integrating robotics, interaction and embedded intelligence into our buildings, cities and cultures. The group offers a class that teams architecture and engineering students on mechatronic projects.

This article on “Dancing with Robots” can only offer a small taste of the amazing collaborations between humans and robots, and between artists, engineers and scientists. A while ago, I also reviewed the SEAM 2010 exhibition in Sydney, which showcased many other works of interactive human-machine aesthetics, digital, virtual and mechanical, from Stelarc, Obarzanek and Medlin to Paul Granjon, Petra Gemeinboeck, Frederic Bevilacqua, Chris Ziegler and many more.

The history of human-robot interaction is much richer and more nuanced than the current crop of cute robot dance videos would suggest. If Aldebaran’s plans for a robot dance competition take off, though, perhaps they will inspire a new generation of collaborative human-robot artists.

SciFi, Design and Technology

Make It So: What Interaction Designers Can Learn from Science Fiction Interfaces
Presentation Notes, Nathan Shedroff and Chris Noessel
4 September 2009, dConstruct 09 Conference, Brighton, UK

(also SXSW 2012?)

This is the first presentation of only a portion of the material we’ve found in our analysis of science fiction films and television series. We’re also looking at industry future-vision films (like Apple’s Knowledge Navigator) as well as existing products and research projects. Our analysis includes properties (films and TV), themes (different issues in interface design), as well as the historical context of the work (such as the state of technology at the time of the property’s release). In addition, we’re interviewing developers (including production designers from films), but this material isn’t presented in this talk. For this presentation, we’ve focused on the major issues we’ve uncovered, part academic and theoretical, and part more practical lessons.

How design influences SciFi and how SciFi influences design:

We’ve chosen to focus on interface and interaction design (and not technology or engineering). Some visual design issues relate but, mostly, in this talk we’re not approaching issues of styling. We’ve chosen the media of SciFi (TV and films) because a thorough analysis of interaction design in SciFi requires that the examples be visual, so that interfaces are completely and concretely represented, include the motion that describes the interaction, and (sometimes) have been seen by a wide audience.

Scientifically determining “influence” in any context (whether from design on SciFi or vice versa) is difficult, and much of what we illustrate is inference on the part of the authors.

Melanie Hall: Science at the movies: Prometheus and artificial intelligence

The search for the origins of humanity, meeting one’s maker, and discovering why we are here: Ridley Scott’s latest film Prometheus tackles some big themes. But arguably the most interesting one surrounds the issue of what it is to be human, raised in the form of the android David.

Both Alien and its sequel Aliens, to which Prometheus is said to be a prequel (although Ridley Scott has disputed this, conceding only that the films all inhabit the same universe), included androids in their crews.

But in Prometheus, the android’s story is shifted more to centre, focusing on what defines humanity, and whether a robot can ever hope to achieve it.

via Melanie Hall: Science at the movies: Prometheus and artificial intelligence.

The Future for Robotics (2)

frog’s Creative Director Scott Jenson, the first UI designer at Apple and until recently head of UX for Google mobile, has blogged about smart devices and how they change the design process. This is relevant to the very real near future of robotics, so I’m continuing the zeitgeist sampling here.

Triumph of the Mundane

By Scott Jenson - April 18, 2012

Smart devices require a significant shift in thinking

This blog explores how to design smart devices. But these devices are just so new, and require such new insights, that our quaint, old-school notions of UX design are completely blinding us. We are stuck between the classic paradigm of desktop computers and the futuristic fantasy of smart dust. The world is either fastidious or fantastic. The path ahead is hard to see. Alan Kay said the best way to predict the future is to invent it… but what if we don’t know what we want?

Coffee Maker Syndrome
I’ve long proposed just-in-time interaction as a core approach to smart devices, but while presenting on this topic over the past year, it has astounded me that people have such a hard time just thinking about the overall landscape of smart devices. Take, for example, this tweet:

    Overheard at #CES: “I’m pretty sure my coffee maker doesn’t NEED apps.”

On the face of it, this makes perfect sense. It seems unlikely you’ll be reading your email on your coffee maker. But this dismissive approach is an example of what Jake Dunagan has called “the crackpot realism of the present”. We are so entrenched in our current reality that we dismiss any exploration of new ideas. By stating that apps on a coffee maker would be silly (which is true), we easily dismiss any discussion of other potential visions of functionality.

When television was first introduced, the earliest programs were literally radio scripts read aloud in front of the camera. Radio had been dominant for decades so broadcasters just coasted into TV without thinking creatively about how to approach the medium differently. As Marshall McLuhan said, “We look at the present through a rearview mirror; we walk backwards into the future.”

Smart devices require three big shifts
Assuming that smart devices require apps is like walking backwards into the future. We don’t need our smart devices to run Microsoft Office; we just need them to, say, log their electrical usage (internal, invisible functionality) or give us quick how-to videos (simple, user-facing functionality).

If we want to properly discuss how to design smart devices, we must appreciate how they shift away from standard computers in three significant ways: micro functionality, liberated interaction, and a clustered ecosystem.

Shift 1: Micro Functionality
In my last post I discussed a fundamental UX axiom that Value must be greater than Pain. This handy little axiom implies many useful theorems. The most radical is that if pain gets very low, the actual value can also be low. While expensive tablets demand significant functional apps, cheap processing allows for more humble micro functionality. It’s one of the biggest hurdles that people have in discussing smart devices. They are so entrenched in the PC paradigm that they assume every device with a CPU must be bristling with functionality.

However, simple doesn’t equate to useless. For example, whenever I offer up the possibility of a ‘smart toaster’ people often chuckle; it’s the coffee maker syndrome all over again. But there are lots of small and even fun things a toaster could do: log its electrical usage, offer up an instructional video on how to clean the crumb tray, report any diagnostic errors, call the support line for you, or even tweak its ‘toast is done’ sound. All of these are fairly trivial but are still useful if a) the value is genuine and b) the cost of adding the functionality is small. A $600 tablet must do quite a bit, but this isn’t true for a $40 toaster.

The biggest impact of micro functionality is in how little real interactive power is required. So often when I talk of using HTML5 as the lingua franca of smart devices, people trot out the ‘it can’t do X’ argument, extolling the superiority of native apps. But micro functionality is so basic and simple that HTML5 is more than adequate: you’ll normally only need to view or change a simple value. Micro functionality only requires micro expressivity.
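
To make “micro expressivity” concrete, here is a toy sketch in Python of a hypothetical smart toaster’s entire interactive surface. The endpoint and JSON shape are invented for illustration (a real device along these lines would more likely serve an HTML5 page), but the point stands: the whole “app” amounts to viewing or changing a couple of values.

    # Toy sketch of micro functionality: one hypothetical device, one
    # tiny state document, viewable and changeable over HTTP.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    state = {"done_sound": "chime", "kwh_today": 0.12}  # invented state

    class ToasterHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # View the device's entire state: one small JSON document.
            body = json.dumps(state).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def do_POST(self):
            # Change a simple value, e.g. {"done_sound": "foghorn"}.
            length = int(self.headers.get("Content-Length", 0))
            state.update(json.loads(self.rfile.read(length)))
            self.send_response(204)
            self.end_headers()

    HTTPServer(("", 8080), ToasterHandler).serve_forever()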

Shift 2: Liberated Interaction
Remember that Value must be > Pain. Micro functionality requires micro pain to be viable. No one is going to argue with their toaster; this type of functionality has to be quick, fast, and easy. Unfortunately, the trend today is that any device with functionality will usually have a tiny display, tinier buttons, a complex user manual, and a tech support line.

Smart devices need to be liberated from being solely responsible for all interaction. I’ve written previously about just-in-time interaction, which allows any smart display (a phone, tablet, TV, interactive goggles and, yes, a laptop) to interact with a smart device. Using a significantly more capable device is so much better than cobbling together a cheap LCD display with fiddly little buttons on the device itself. A generation raised on rich phone interaction will expect, even demand, better.

Moving interaction to smart displays also has a huge benefit for manufacturers. The cost of computation will likely be the least of a manufacturer’s concerns. Small displays, buttons, complex instruction manuals and tech support lines are all very expensive. What if manufacturers could assume that any smart device they built would have free access to a big interactive color screen? Not only that, but it would have excellent computing power, smooth animated graphics, a robust programming environment and, to top it off, a universally accepted consumer interaction model that doesn’t require any training? Using these displays would allow enormous cost reductions, not only in parts costs but in simpler development costs as well.

Shift 3: A Clustered Ecosystem
Once we liberate the interaction from the device, we’ve unlocked its functionality across many locations. Not only can I use my phone in front of my thermostat, but also with my TV across the room and with my laptop at work across the city. By liberating functionality from devices, we liberate our ability to use those devices from anywhere. My previous post was called “Mobile Apps Must Die” not because apps will actually die (they will always have a role) but because the shortsighted desire to funnel functionality exclusively through them must stop. If these very simple apps are written in HTML5, they can be used across any device, which is very powerful indeed.

Conclusion
It is inevitable that devices will ship with interactivity built in. But as more devices become functional, it’s going to become overwhelming to have each device be its own island. The three shifts discussed here, micro functionality, liberated interaction and a clustered ecosystem, all point to a new pattern of thinking: small devices with small functionality that all work together in profound ways. This is a triumph of the mundane, a challenge to our PC-soaked way of thinking.

But this new approach requires an open standard that all devices would use to announce their presence and offer their functionality in a universal language like HTML. In many ways we are on the cusp of a new era of hardware discoverability, much like the one the web first brought us in the 90s.
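
No such standard is named in the post, but its shape is easy to sketch. Below is a toy Python version of a device announcing its presence and the URL of its HTML interface on the local network; the port number and message format are invented for illustration, and a production design would more likely build on an established discovery protocol such as mDNS/DNS-SD or SSDP.

    # Toy discovery sketch: periodically broadcast the device's name and
    # the URL of its HTML UI so nearby smart displays can find it.
    import json
    import socket
    import time

    ANNOUNCE_PORT = 5555  # assumed for illustration, not a standard port

    def announce(name, interface_url):
        message = json.dumps({"device": name, "ui": interface_url}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            while True:
                sock.sendto(message, ("255.255.255.255", ANNOUNCE_PORT))
                time.sleep(5)  # re-announce so newly arrived displays see us

    announce("toaster-kitchen", "http://192.168.1.20:8080/")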

What’s holding smart devices back is our oh-so-human ability to misunderstand their potential. These three shifts are a big step in understanding what we need to do. Let’s be clear: this is not technically challenging! We just need to understand what we want. Alan Kay is right: we have to invent our future. frog, where I work, is just starting to build simple prototypes to validate these concepts. As they mature, I’ll be sharing more information about them. It’s clear that technology is not the limiting factor; it’s our ability to imagine a different future.

blog post in the wild at frog design’s designmind