Category Archives: methods

A call for debate on robot policy


The 1953 New Yorker cartoon that started the “Take me to your leader” meme showed two aliens newly arrived on earth asking a donkey to, effectively, give them policy guidance. This is exactly what our ‘brave new’ human-robot world looks like. Complex technologies can have profound and subtle impacts on the world and robotics is not only a multidisciplinary field, but one which will have impact on every area of life. Where do we go for policy?

Ryan Calo’s recent report for the Brookings Institution, “The Case for a Federal Robotics Commission”, calls for a central body to address the lack of competent and timely policy guidance in robotics. For example, the US risks falling far behind other countries in the commercial UAV field due to the failure of the FAA to produce regulations governing drones. Calo points out the big gap between policy set at the research level (i.e. the OSTP) and at the commercial application end of the scale (i.e. the FAA).

However, with robotics being a technology applicable in almost every domain, there will always need to be multiple governing bodies; one central agency is insufficient. Perhaps the answer lies in central information points, like the Brookings Institution or Robohub, which provide a bridge between robotics researchers and the ‘rest of the world’. Informed discussion is at the heart of democracy, and in a complex technical world, scientists, social scientists and science communicators must lead the debate.

I suggest that our current robotics policy agenda needs to be reformed and better informed. This article provides a review of some recent policy reports and considers the changing shape of 21st century scientific debate. In conclusion, I make several recommendations for change:

  1. The creation of a global robotics policy think tank.
  2. That the CTO of the USA and the global equivalents make robotics a key strategy discussion.
  3. That a US Robotics Commission is created – while robotics is an emerging field – to implement a cross-disciplinary understanding of this technological innovation and its impacts at all levels of society.
  4. That funding bodies make grants available for cross-disciplinary organizations engaged in creating a platform for informed debate on emerging technologies.

The Pew Report and the problem with popular opinion

Much of today’s information comes via the media and popular opinion, from policy, analysis or government groups that are just plain out of touch, or unable to absorb or use information across disciplines. In the worst cases a feedback loop is created, with bad opinions being repeated until they are accepted as truth. Recent reports from the Brookings Institution and the Pew Research Center demonstrate both the good and the bad of current policy debates.

The widely reported Pew Research Center report on “AI, Robotics and the Future of Jobs” highlights the ridiculousness of the situation. The report canvassed more than 12,000 experts sourced from previous reports, targeted listservs and subscribers to Pew’s research, who are largely professional technology strategists. Eight broad questions were presented, covering various technology trends. 1,896 experts and members of the interested public responded to the question on AI and robotics.

The problem is that very few of the respondents have more than a passing knowledge of robotics. To anyone in robotics, the absence of people with expertise in robotics and AI is glaringly obvious. While there are certainly insightful people and opinions in the report, the net weight of this report is questionable, particularly as findings are reduced to executive summary level comments such as:

“Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers – with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”

These findings are simply popular opinion without basis in fact. However, the Pew Research Center is well respected and considered relevant. The center is a non-partisan organization which provides all findings freely “to inform the public, the press and policy makers”, not just on the internet and the future of technology, but on religion, science, health, even the impact of the World Cup.

How do you find the right sort of information to inform policy and public opinion about robotics? How do you strike a balance between understanding technology and understanding the social implications of technology developments?

Improving the quality of public policy through good design

Papers like Heather Knight’s “How Humans Respond to Robots” or Ryan Calo’s “The Case for a Federal Robotics Commission” for the Brookings Institution series on “The Future of Civilian Robotics”, and organizations like Robohub and the Robots Association, are good examples of initiatives that improve public policy debate. At one end of the spectrum, an established policy organization is sourcing from established robotics experts. At the other end, a peer group of robotics experts is providing open access to the latest research and opinions within robotics and AI, including exploring ethical and economic issues.

Heather Knight’s report “How Humans Respond to Robots: Building Public Policy through Good Design” for the Brookings Institution is a good example of getting it right. The Brookings Institution is one of the oldest and most influential think tanks in the world, founded in Washington D.C. in 1916. It is non-partisan and generally regarded as centrist in agenda. Although based in the US, the institution has global coverage and attracts funding from both philanthropic and government sources, including the governments of the US, UK, Japan and China. It is the most frequently cited think tank in the world.

Heather Knight is conducting doctoral research at CMU’s Robotics Institute in human-robot interaction. She has worked at NASA JPL and Aldebaran Robotics, she cofounded the Robot Film Festival and she is an alumna of the Personal Robots Group at MIT. She has degrees in Electrical Engineering, Computer Science and Mechanical Engineering. Here you have a person well anchored in robotics with a broad grasp of the issues, who has prepared an overview on social robotics and robot/society interaction. This report is a great example of public policy through good design, if it does indeed make its way into the hands of people who could use it.

As Knight explains, “Human cultural response to robots has policy implications. Policy affects what we will and will not let robots do. It affects where we insist on human primacy and what sort of decisions we will delegate to machines.” Automation, AI and robotics are entering the world of human-robot collaboration, and we need to support and complement the full spectrum of human objectives.

Knight’s goal was not to be specific about policy, but rather to sketch out the range of choices we currently face in robot design and how they will affect future policy questions. She provides many anecdotes and examples where thinking about “smart social design now, may help us navigate public policy considerations in the future.”

Summary: “How Humans Respond to Robots” (Brookings report)

Firstly, people require very little prompting to treat machines or personas as having agency. Film animators have long understood just how simple it is to turn squiggles on the screen into expressive characters in our minds and eyes. We are neurologically coded to follow motion and to interpret even inanimate objects as having social or intentional actions. This has implications for future human relationships as our world becomes populated with smart moving objects; many studies show that we can bond with devices and even enjoy taking orders from them.

There is also the impact of the “uncanny valley” – a term that describes the cognitive dissonance created when something is almost, but not quite, human. This is still a fluid and far from well-understood effect, but it foreshadows our need for familiarity, codes and conventions around human-robot interactions. Film animators have created a vocabulary of tricks that create the illusion of emotion. So, too, have robot designers, who are developing tropes of sounds, colors, and prompts (that may borrow from other devices like traffic lights or popular culture) to help robots convey their intentions to people.

With regard to our response to robots, Knight draws attention to the fallacy of generalizing across cultures. Most human-robot interaction (HRI) studies show that we also have very different responses along other axes, such as gender, age, experience and engagement, regardless of culture.

Similarly, our general responses have undergone significant change as we’ve adapted to precursor technologies such as computers, the internet and mobile phones. Our willingness to involve computers and machines in our personal lives seems immense, but it raises issues of privacy and social isolation alongside the more benign prospects of utility, therapy and companionship.

As well as perhaps regulating or monitoring the uses of AI, automation and robots, Knight asks: do we need to be proactive in considering the rights of machines? Or at least in considering conventions for their treatment? Ethicists are doing the important job of raising these issues, which range from what choices an autonomous vehicle should make in a scenario where all possible outcomes involve human injury, to whether we should ‘protect’ machines in order to protect our social covenants with real beings. As Kant said in his lectures on ethics, we have no moral obligation towards animals, and yet our behavior towards them reflects our humanity.

“If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.” – Kant

This suggests that, as a default, we should create more machines that are machine-like, machines that by design and appearance telegraph their constraints and behaviors. We should avoid the urge to anthropomorphize and personalize our devices, unless we can guarantee our humane treatment of them.

Knight outlines a human-robot partnership framework across three categories: Telepresence Robots, Collaborative Robots and Autonomous Vehicles. A telepresence robot is comparatively transparent, acting as a proxy for a person, who provides the high-level control. A collaborative robot may be working directly with someone (as in robot surgery) or be working on command but interacting autonomously with other people (e.g. a delivery robot). An autonomous vehicle extends the previous scenarios and may be able to operate at a distance or respond directly to the driver, pilot or passenger.

The ratio of shared autonomy is shifting towards the robot, and the challenge is to create patterns of interaction that minimize friction and maximize transparency, utility and social good. In conclusion, Knight calls for designers to better understand human culture and practices in order to frame issues for policy makers.

Brookings Institution and NY Times: Creating a place for dialogue

The Brookings Institution also released several other reports on robotics policy directions as part of its series on The Future of Civilian Robotics, which culminated in a panel discussion. This format is similar to the NY Times Room for Debate, which brings outside experts together to discuss timely issues. However, there is a preponderance of experts in law, governance, education and journalism on the panels, perhaps because these disciplines attract multidisciplinary or “meta” thinkers.

Is this the right mix? Are lawyers the right people to be defining the policy scope of robotics? Ryan Calo’s contribution to robotics as a law scholar has been both insightful and pragmatic, and well beyond the scope of any one robotics researcher or robot business. However, Calo has made robotics and autonomous vehicles his specialty area and has spent years engaged in dialogue with many robotics researchers and businesses.

Before moving to the University of Washington as Faculty Director of its new Tech Policy Lab, Calo was the Director of Robotics and Privacy at Stanford Law School’s Center for Internet & Society. Calo has an AB in Philosophy from Dartmouth College and a Doctorate in Law, cum laude, from the University of Michigan. His writings have won best paper awards at conferences, have been read to the Senate, have provoked research grants, and have been republished in many top newspapers and journals.

Which comes first, the chicken or the egg? As technologies become more complex, can social issues be considered without a deep understanding of the technology and what it can or can’t enable? Equally, is it the technology that needs to be addressed or regulated, or is it the social practices, which might or might not be changed as we embrace new technologies?

It’s not surprising that lawyers are setting the standard for the policy debate, as writing and enacting policy is their bread and butter. But the underlying conclusion seems to be that we need deep engagement across many disciplines to develop good policy.

Summary: “The Case for a Federal Robotics Commission”


When Toyota customers claimed that their cars were causing accidents, the various government bodies involved called on NASA to investigate the complex technology interactions and separate mechanical issues from software problems. Ryan Calo takes the position that robotics, as a complex emerging technology, needs an organization capable of investigating potential future issues and shaping policy accordingly.

Calo calls on the US to create a Federal Robotics Commission, or risk falling behind the rest of world in innovation. Current bodies are ill-equipped to tackle “robotics in society” issues other than in piecemeal fashion. Understanding robotics requires cross-disciplinary expertise, and the technology itself may make possible new human experiences across a range of fields.

“Specifically, robotics combines, for the first time, the promiscuity of data with physical embodiment – robots are software that can touch you,” says Calo.

Society is still integrating the internet and now “bones are on the line in addition to bits”. There may be more victims, but how do we identify the perpetrators in a future full of robots? Law is, by and large, defined around human intent and foreseeability, so current legal structures may require review.

Calo considers the first robot-specific law, passed by Nevada in 2011 for “autonomous vehicles”, which defined autonomous activity in a way that included most modern car behaviors and thus had to be repealed. That error was due to a lack of technical expertise, but Calo also foresees the deeper problem of robots introducing entirely new classes of behavior.

Human driving error accounts for tens of thousands of fatalities. While autonomous vehicles will almost certainly reduce accidents, they might create some accidents that would not have occurred if humans were driving. Is this acceptable?

Calo also describes the ‘underinclusive’ nature of robotics policy, citing the FAA’s development of regulations for drones, which often serve as delivery mechanisms for small cameras. However, the underlying issue of privacy is raised any time small cameras are badly deployed: in trees, on phones, on poles, on planes or on birds, not just on drones.

Other issues raised by Calo include: the impact of high frequency automated activity with real world repercussions; the potential for adaptive, or ‘cognitive’, use of communications frequencies; and potential problems swapping between automated and human control of systems, if required by either malfunction or law.

Calo then describes his vision for a Federal Robotics Commission modeled on similar previous organizations. This FRC would advise other agencies on policy relating to robots, drones or autonomous vehicles, and also advise federal, state and local lawmakers on robotics law and policy.

The FRC would convene domestic and international stakeholders across industry, government, academia and NGOs to discuss the impact of robotics and AI on society, and could potentially file ‘friend of the court’ briefs in complex technology matters.

Does this justify the call for another agency? Calo admits that there is overlap with the National Institute of Standards and Technology, the White House Office of Science and Technology Policy, and the Congressional Research Service. However, he believes that none of these bodies speaks to the whole of the “robotics in society” question.

Calo finishes with an interesting discussion with Cory Doctorow about whether or not robotics could be considered separate from computers “and the networks that connect them”. Calo posits that the physical harm an embodied system, or robot, could do is very different from the economic or intangible harm done by software alone.

In conclusion, Calo calls for a Federal Robotics Commission to take charge of early legal and policy infrastructure for robotics. It was the decision to apply the First Amendment to the internet, and to immunize platforms for what users do, that allowed internet technology to thrive – and that has, in turn, created new 21st-century platforms for legal and policy debate.

Robohub – Using 21st century tools for science communication


In the 21st century, science has access to a whole new toolbox of communications. Where 19th-century science was presented as theater, in the form of public lectures and demonstrations, 20th-century science grew an entire business of showcases, primarily conferences and journals. New communication media are now disrupting established science communication.

There is an increasing expectation that science can be turned into a top 500 YouTube channel, like Minute Physics, or an award-winning Twitter account, like Neil deGrasse Tyson’s @neiltyson, which has 2.34 million followers. We are witnessing the rise of MOOCs (massive open online courses) like the Khan Academy, and Open Access journals, like PLOS, the Public Library of Science.

UC Berkeley has just appointed a ‘wikipedian-in-residence’, Kevin Gorman. The ‘wikipedian-in-residence’ initiative started with museums, libraries and galleries, making information about artifacts and exhibits available to the broader public. This is a first for a university, however, and the goal is twofold: to extend public access to research that is usually behind paywalls or simply obscure; and to improve the writing, researching and publishing skills of students. Students are encouraged to find gaps in Wikipedia and fill them, with reference to existing research.

In between individual experts and global knowledge banks there is space for curated niche content. Robohub is one of the sites that I think can play an integral role in both shaping the quality of debate in robotics and expanding the science communication toolbox. (Yes, I’m deeply involved in the site, so am certainly biased. But the increasing number of experts who are giving their time voluntarily to our site, and the rising number of visitors, give weight to my assertions.)

Robohub had its inception in 2008 with the birth of the Robots Podcast, a biweekly feature on a range of robotics topics, now numbering more than 150 episodes. As the number of podcasts and contributors grew, the non-profit Robots Association was formed to provide an umbrella group tasked with spinning off new forms of science communication, sharing robotics research and information across the sector, across the globe and to the public.

Robohub is an online news site with high quality content, more than 140 contributors and 65,000 unique visitors per month. Content ranges from one-off stories about robotics research or business, to ongoing lecture series and micro lectures, to inviting debate about robotics issues, like the ‘Robotics by Invitation’ panels and the Roboethics polls. There are other initiatives in development including report production, research video dissemination and being a hub for robotics jobs, crowdfunding campaigns, research papers and conference information.

In lieu of a global robotics policy think tank, organizations like Robohub can do service by developing a range of broad policy reports, or by providing public access to a curated selection of articles, experts and reports.

In Conclusion

“Take me to your leader?” Even if we can identify our leaders, do they know where we are going? I suggest that our current robotics policy agenda needs to be reformed and better informed. This article provides a review of some recent policy reports and considers the changing shape of 21st century scientific debate. In conclusion, I make several recommendations for change:

  1. The creation of a global robotics policy think tank.

I believe that a global robotics policy think tank will create informed debate across all silos and all verticals, a better solution than regulation or the precautionary principle.

  2. That the CTO of the USA and the global equivalents make robotics a key strategy discussion.

Robotics has been identified as an important global and national economic driver. The responsibility and impetus to bridge the silos that hamper both policy and innovation must come from the top.

  3. That a US Robotics Commission is created – while robotics is an emerging field – to implement a cross-disciplinary understanding of this technological innovation and its impacts at all levels of society.

At a national rather than a global level, NASA is stepping in to bridge the gaps between technology developed under the aegis of bodies like the OSTP, NSF, DARPA, etc., and the end-effector regulatory bodies like the DOF, DOA, DOT, etc. Perhaps a robotics-specific organization or division within NASA is called for.

  4. That funding bodies make grants available for cross-disciplinary organizations engaged in creating a platform for informed debate on emerging technologies.

Organizations that are cross-disciplinary with a global reach are very hard to fund, as most funding agencies restrict their contributions either geographically or by discipline. A far-reaching technology like robotics needs a far-reaching policy debate.

Dancing with robots

Robots parodying the latest video hits are cute, but some choreographers, artists and human-robot interaction specialists have pushed the boundaries of how humans and robots move together in fascinating ways. Thomas Freundlich has just uploaded a video of his work “Human Interface” with ABB industrial robots, which spurred me to post a snapshot or two from the history of robot choreography.

“Human Interface” is an evening-length piece for four dancers – two human and two robot – and is an extension of Freundlich’s 2008 work “Actuator”. Freundlich programs the industrial arms himself, using ABB’s RobotStudio software and its safe-mode capabilities, which allow humans to work alongside the robots. Freundlich is himself one of the dancers and finds that robots can make very nuanced dancers, with the ability to consistently repeat very finely tuned movements. “Human Interface” premiered at the Zodiak Center in Helsinki in 2012 to rave reviews.

“If someone still thinks contemporary dance is a joke, they would do well to make their way to the Pannuhalli stage of the Cable Factory and reconsider their opinion. There, a spectacle awaits: Two real industrial robots and two dancers, along with a world-class stage designer and musician offer an experience reminiscent of James Cameron’s film Avatar (2009). For me, this dance work was more three-dimensional and scarier than the film.”
– Marja Hannula, Helsingin Sanomat, Finland’s leading daily newspaper, May 24th 2012

Both Stäubli and KUKA have also produced dancing robots, with Stäubli’s RoboLounge homage to Daft Punk and KUKA’s synchronized robot arms, which are also used by the robot cinematography company Bot & Dolly. But Freundlich’s work is more closely aligned with pioneering human/machine choreography by the likes of Margie Medlin, Gideon Obarzanek and Margo Apostolos.

Gideon Obarzanek, of Chunky Move, is renowned for utilizing digital technologies, lasers, motion capture and projection. In recent works, like Connected, Obarzanek inverts his technological aesthetic in partnership with sculptor Reuben Margolin, to create a work which animates both the body and the machine through physical connection between the dancers and Margolin’s purpose-built, kinetic sculpture.

Reuben’s startlingly live sculptural works – constructed from wood, recycled plastic, paper and steel – transcend their concrete forms once set into motion, appearing as natural waveforms in a weightless kinetic flow. Suspended by hundreds of fine strings receiving information from multiple camshafts and wheels, his sculptures reveal in articulate detail the impulses of what they are coupled to. In Connected, it is people – athletic and agile dancers’ bodies twisting and hurtling through space, as well as people in recognisable situations.

Beginning with simple movements and hundreds of tiny pieces, the dancers build their performance while they construct the sculpture in real time. During the performance, these basic elements and simple physical connections quickly evolve into complex structures and relationships.

“All gods are homemade, and it is we who pull their strings, and so, give them the power to pull ours.” – Aldous Huxley

“Obarzanek seems to function in many ways as an irritant, disrupting our comfortable experiences of dance, confounding notions of illusion and representation, and disturbing the criteria by which dance might be judged good or bad.” [The Age]

However, Obarzanek rides on the shoulders of pioneering moving-image, moving-body choreographers like Margie Medlin. With a background in film and dance, Medlin has been crossing the boundaries of art and science for well over 25 years. Her recent installations devise software and hardware tools that create a highly intelligent reflection on dance through the media of new technology.

Medlin’s Quartet Project, from 2004 to 2007, was a dance, music, new-media and robotic performance that observed and articulated communication and perception of the human body, exploring and creating real-time relationships between music, the gesture of playing music, dance, robotics and animation. Quartet was a collaboration between artists, technicians and scientists: Stevie Wishart, musical director; Rebecca Hilton, choreographer; Holger Deuter (DNA 3d), animation / interactive / motion capture / real-time set; Gerald Thompson, motion control camera robot; Nick Rothwell, interface designer. The biomedical science of hearing implemented in Quartet was produced in association with The Physiology Lab, University of Cambridge.

The Quartet project commissioned complex tools to create visual bridges between cyberspace, augmented reality and physical space. These systems present a versatile and creative process for experimenting with cause and effect in multiple media; an insight into what it means to transform one medium or gesture into a completely different one. Technically, these tools created a motion capture system combining two skeletons: one from the data of a dancer and one from the data of a musician. Together they explore the choreography of cinematic space and the poetics of looking and moving.

Quartet was a project to develop a real-time interactive robot to perform live on stage with a dancer and a musician. Advanced motion control technology was used to capture the dancer’s movements. I chose motion sensors made by MicroStrain in the US. These were interfaced via a serial data protocol radio link devised by Glen Anderson and converted to motor control signals at the robot. Movement data could also be simultaneously recorded by a separate computer running MotionBuilder software, as well as control a 3D avatar which was projected onto a screen behind the performer. [Gerald Thompson]

Robot choreography can be traced back through the work of Margo Apostolos, both live and in publication, from “A comparison of the artistic aspects of various industrial robots” [1988] and “Robot Choreography” [1990], to her more recent work with Mark Morris. Dr. Apostolos was instrumental in bringing internationally renowned director/choreographer Mark Morris to USC for a workshop that integrated motion capture and robotics with modern dance. Robot Choreography was developed as an artistic-scientific collaboration to explore an aesthetic dimension of robotic movement. Robots and control techniques based on biological principles can assist in transferring techniques developed for human choreography to the programming of aesthetic robot motion. The resultant choreographed robot movement integrated art and technology as a possible new art form with relevant research implications.

Dr. Apostolos is Director of Dance and Associate Professor in the USC School of Dramatic Arts. She has authored and presented numerous articles on her research and design in Robot Choreography. In addition to her doctoral and post-doctoral studies at Stanford University, she earned an M.A. in Dance from Northwestern University. She has served as visiting professor in the Department of Psychology at Princeton University and has taught in Chicago, San Francisco, at Stanford University, Southern Illinois University and California Polytechnic State University-San Luis Obispo. A recipient of the prestigious NASA/ASEE Summer Faculty Fellowship, Dr. Apostolos worked for NASA at Jet Propulsion Laboratory/Caltech as a research scientist in the area of space telerobotics.

“The Robot Etudes” was published in 2010 by students in the Immersive Kinematics group at the University of Pennsylvania, outlining Apostolos’ contribution to robot choreography, the history of robotics and theater, and some of the research and pragmatic implications for ongoing work in human-machine interaction.

In spring 2010, architecture and engineering students at the University of Pennsylvania were teamed together to create artistic mechatronic robotic devices. The context for their creations was Shakespeare’s A Midsummer Night’s Dream. This became a joint effort between professors from Mechanical Engineering and Architecture and a director from a professional theater troupe, instructing a group of students to develop a performance, performed by the Pig Iron Theatre Troupe at the Annenberg Center, called The Robot Etudes. Whereas robots have been used in theater before, and artistic directors have instructed technicians to develop special-effects robots, developing robotic elements specifically for theater with a diverse set of creative innovators is new. This paper focuses on the process by which the play was formed and the successes and struggles in forming a cooperative experiment between three very different disciplines.

Immersive Kinematics is a collaboration between Penn Engineering and Penn Design and expands the roles of architecture and engineering focusing on integrating robotics, interaction, and embedded intelligence in our buildings, cities, and cultures. The group offers a class teaming architecture and engineering students in mechatronic projects.

This article on “Dancing with Robots” can only offer a small taste of some of the amazing works of collaboration between humans and robots, and between artists, engineers and scientists. A while ago, I also reviewed the SEAM 2010 exhibition in Sydney, which showcased many other works of interactive machine-human aesthetics – digital, virtual and mechanical – from Stelarc, Obarzanek and Medlin, to Paul Granjon, Petra Gemeinboeck, Frederic Bevilacqua, Chris Ziegler and many more.

The human-robot interaction history is much richer and more nuanced than the current crop of cute robot dance videos would suggest. Although, if Aldebaran’s plans for a robot dance competition take off, perhaps they will inspire a new generation of collaborative human-robot artists.

SciFi, Design and Technology

Make It So: What Interaction Designers Can Learn from Science Fiction Interfaces
Presentation Notes, Nathan Shedroff and Chris Noessel
4 September 2009, dConstruct 09 Conference, Brighton, UK

(also SXSW 2012?)


This is the first presentation of only a portion of the material we’ve found in our analysis of science fiction films and television series. We’re also looking at industry future films (like Apple’s Knowledge Navigator) as well as existing products and research projects. Our analysis includes properties (films and TV), themes (different issues in interface design), as well as the historical context of the work (such as the current technology at the time of the property’s release). In addition, we’re interviewing developers (including production designers from films), but this material isn’t presented in this talk. For this presentation, we’ve focused on the major issues, part academic and theoretical, and part lessons (more practical) we’ve uncovered.

How design influences SciFi and how SciFi influences design:

We’ve chosen to focus on interface and interaction design (and not technology or engineering). Some visual design issues relate, but mostly, in this talk, we’re not approaching issues of styling. We’ve chosen the media of SciFi (TV and films) because a thorough analysis of interaction design in SciFi requires that the examples be visual, so that interfaces are completely and concretely represented; include motion that describes the interaction; and (sometimes) have been seen by a wide audience.

Scientifically determining “influence” in any context (whether from Design on SciFi or vice versa) is difficult, and much of what we illustrate is inference on the part of the authors.

Melanie Hall: Science at the movies: Prometheus and artificial intelligence

The search for the origins of humanity, meeting one’s maker, and discovering why we are here: Ridley Scott’s latest film Prometheus tackles some big themes. But arguably the most interesting one surrounds the issue of what it is to be human, raised in the form of the android David.

Both Alien and its sequel Aliens, which Prometheus is said to be a prequel to (although Ridley Scott has disputed this, only conceding that the films all inhabit the same universe), included androids in their crew.

But in Prometheus, the android’s story is shifted more to centre stage, focusing on what defines humanity, and whether a robot can ever hope to achieve it.

via Melanie Hall: Science at the movies: Prometheus and artificial intelligence.

The Future for Robotics (2)

frog’s Creative Director, Scott Jenson, the first UI designer at Apple and former head of UX at Google Mobile, recently blogged about smart devices and how they change the design process. This is relevant to the very real near future of robotics, so I’m continuing the zeitgeist sampling here.

Triumph of the Mundane

By Scott Jenson – April 18, 2012

Smart devices require a significant shift in thinking

This blog explores how to design smart devices. But these new devices are just so new, and require such new insights, that our quaint, old-school notions of UX design are completely blinding us. We are stuck between the classic paradigm of desktop computers and the futuristic fantasy of smart dust. The world is either fastidious or fantastic. The path ahead is hard to see. Alan Kay said the best way to predict the future is to invent it… but what if we don’t know what we want?

Coffee Maker Syndrome
I’ve long proposed just-in-time interaction as a core approach to smart devices, but while presenting on this topic over the past year it has astounded me that people have such a hard time just thinking about the overall landscape of smart devices. Take for example this tweet:

    Overheard at #CES: “I’m pretty sure my coffee maker doesn’t NEED apps.”

On the face of it, this makes perfect sense. It seems unlikely you’ll be reading your email on your coffee maker. But this dismissive approach is an example of what Jake Dunagan has called “the crackpot realism of the present”. We are so entrenched in our current reality that we dismiss any exploration of new ideas. By stating that apps on a coffee maker would be silly (which is true), we easily dismiss any discussion of other potential visions of functionality.

When television was first introduced, the earliest programs were literally radio scripts read aloud in front of the camera. Radio had been dominant for decades so broadcasters just coasted into TV without thinking creatively about how to approach the medium differently. As Marshall McLuhan said, “We look at the present through a rearview mirror; we walk backwards into the future.”

Smart devices require three big shifts
Assuming that smart devices require apps is like walking backwards into the future. We don’t need our smart devices to run Microsoft Office, we just need them to, say, log their electrical usage (internal, invisible functionality) or give us quick how-to videos (simple user-facing functionality).

If we want to properly discuss how to design smart devices, we must appreciate how they shift away from standard computers in three significant ways: micro functionality, liberated interaction, and a clustered ecosystem.

Shift 1: Micro Functionality
In my last post I discussed a fundamental UX axiom that Value must be greater than Pain. This handy little axiom implies many useful theorems. The most radical is that if pain gets very low, the actual value can also be low. While expensive tablets demand significant functional apps, cheap processing allows for more humble micro functionality. It’s one of the biggest hurdles that people have in discussing smart devices. They are so entrenched in the PC paradigm that they assume every device with a CPU must be bristling with functionality.

However, simple doesn’t equate to useless. For example, whenever I offer up the possibility of a ‘smart toaster’ people often chuckle; it’s the coffee maker syndrome all over again. But there are lots of small and even fun things a toaster could do: log its electrical usage, offer up an instructional video on how to clean the crumb tray, report any diagnostic errors, call the support line for you, or even tweak its ‘toast is done’ sound. All of these are fairly trivial but are still useful if a) the value is genuine and b) the cost of adding the functionality is small. $600 tablets must do quite a bit, but this isn’t true for a $40 toaster.

The biggest impact of micro functionality is in how little real interactive power is required. So often when I talk of using HTML5 as the lingua franca of smart devices, people trot out the ‘it can’t do X’ argument, extolling the superiority of native apps. But micro functionality is so basic and simple that HTML5 is more than adequate: you’ll normally only need to view or change a simple value. Micro functionality only requires micro expressivity.
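
To make the scale of “micro expressivity” concrete, here is a minimal sketch (mine, not Jenson’s) of the device side: a toaster exposing one readable log and one writable setting over plain HTTP, using only Python’s standard library. The state fields, port and endpoint behavior are invented for illustration.

```python
# Minimal sketch (illustrative, not a product design): a "smart toaster"
# exposing micro functionality - one readable state and one writable
# setting - over plain HTTP, so any smart display can read or change it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"done_sound": "chime", "watt_hours_today": 142}  # invented example values

class ToasterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report the whole (tiny) device state as JSON.
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        # Change a single value, e.g. the 'toast is done' sound.
        length = int(self.headers.get("Content-Length", 0))
        update = json.loads(self.rfile.read(length))
        STATE.update({k: v for k, v in update.items() if k in STATE})
        self.send_response(204)  # no body needed; micro expressivity
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ToasterHandler).serve_forever()
```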

Shift 2: Liberated Interaction
Remember that Value must be > Pain. Micro functionality requires micro pain to be viable. No one is going to argue with their toaster; this type of functionality has to be quick, fast, and easy. Unfortunately, the trend today is that any device with functionality will usually have a tiny display, tinier buttons, a complex user manual, and a tech support line.

Smart devices need to be liberated from being solely responsible for all interaction. I’ve written previously about just-in-time interaction which allows any smart display (a phone, tablet, TV, interactive goggles, and yes, a laptop) to interact with a smart device. Using a significantly more capable device is so much better than cobbling together a cheap LCD display with fiddly little buttons on the device itself. A generation raised on rich phone interaction will expect, even demand better.

Moving interaction to smart displays also has a huge benefit for manufacturers. The cost of computation will likely be the least of a manufacturer’s concerns. Small displays, buttons, complex instruction manuals, and tech support lines are all very expensive. What if manufacturers could assume that any smart device they built would have free access to a big interactive color screen? Not only that but it would have excellent computing power, smooth animated graphics, a robust programming environment and to top it off a universally accepted consumer interaction model that didn’t require any training? Using these displays would allow enormous cost reductions, not only in parts costs, but in simpler development costs as well.
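
Under the same assumptions as the toaster sketch above, the “liberated” interaction on the smart-display side could be this small: a phone, tablet or laptop reading and tweaking the device’s values instead of fiddly on-device buttons. The toaster.local address is hypothetical.

```python
# Sketch of the smart-display side of just-in-time interaction.
# Assumes the toaster sketch above is reachable at this (hypothetical) URL.
import json
from urllib.request import Request, urlopen

BASE = "http://toaster.local:8080/"

# Read the device's tiny state from any capable display.
state = json.load(urlopen(BASE))
print("Electricity used today:", state["watt_hours_today"], "Wh")

# Change one small setting - micro functionality, micro pain.
update = json.dumps({"done_sound": "fanfare"}).encode()
req = Request(BASE, data=update, method="PUT",
              headers={"Content-Type": "application/json"})
urlopen(req)
```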

Shift 3: A Clustered Ecosystem
Once we liberate the interaction from the device, we’ve unlocked its functionality across many locations. Not only can I use my phone in front of my thermostat but also with my TV across the room and with my laptop at work across the city. By liberating functionality from devices, we liberate our ability to use these devices from anywhere. My previous post was called “Mobile Apps must die” not because apps will actually die (they will always have a role) but because the shortsighted desire to funnel functionality exclusively through them must stop. If these very simple apps are written in HTML5, they can be used across any device, which is very powerful indeed.

Conclusion
It is inevitable that devices will ship with interactivity built in. But as more devices become functional, it’s going to become overwhelming to have each device be its own island. The three shifts discussed here – micro functionality, liberated interaction, and a clustered ecosystem – all point to a new pattern of thinking: small devices with small functionality that all work together in profound ways. This is a triumph of the mundane; a challenge to our PC-soaked way of thinking.

But this new approach requires an open standard that all devices would use to announce their presence and offer their functionality in a universal language like HTML. In many ways we are at the cusp of a new hardware era of discoverability, much like the web first had in the ’90s.
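
No such open standard is specified in the post, so purely as a hedged illustration of what “announcing their presence” might look like, here is a toy beacon that multicasts the location of a device’s web interface, loosely in the style of today’s SSDP/mDNS discovery protocols. The message format and device type are invented; only the multicast group address reuses SSDP’s well-known value.

```python
# Toy discovery beacon (illustrative only - not a real standard):
# the device periodically multicasts where its web interface lives,
# so smart displays on the network can find and use it.
import socket
import time

GROUP, PORT = "239.255.255.250", 1900  # SSDP's well-known multicast group
ANNOUNCEMENT = (
    "NOTIFY * HTTP/1.1\r\n"
    "LOCATION: http://toaster.local:8080/\r\n"  # hypothetical device URL
    "NT: urn:example:device:toaster\r\n"        # invented device type
    "\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
while True:
    sock.sendto(ANNOUNCEMENT, (GROUP, PORT))
    time.sleep(30)  # re-announce so newly arrived displays can discover it
```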

What’s holding smart devices back is our oh-so-human ability to misunderstand their potential. These three shifts are a big step in understanding what we need to do. Let’s be clear, this is not technically challenging! We just need to understand what we want. Alan Kay is right: we have to invent our future. frog, where I work, is just starting to build simple prototypes to validate these concepts. As they mature, I’ll be sharing more information about them. It’s clear that technology is not the limiting factor, it’s just our desire to imagine a different future.

blog post in the wild at frog design’s designmind

Finding a robot we need?

Today I listened to another person asking the world what sort of robot they should build. They had just told us the features it would have and wanted to know what people might use it for, so that they could find the money to build it. This is a completely back-to-front approach to robot building.

This is what Caroline Pantofaru, Leila Takayama, Tully Foote and Bianca Soto refer to as ‘technology push’ in their recent paper. An excerpt is quoted below, but I recommend reading the whole thing.

Exploring the Role of Robots in Home Organization. Pantofaru, Caroline, Takayama, Leila, Foote, Tully, and Soto, Bianca. Proc. of Human-Robot Interaction (HRI), Boston, MA, pp. 327–334 (2012).

2. NEED FINDING
We present need finding as a methodological tool for quickly learning about a user space, inspiring robotics research within that space, and grounding the resulting research. Much (although certainly not all) of robotics research today is inspired by technology push – a technologist deciding to apply a technology to a problem. User-based research often does not start until after a prototype or system specification exists. This is a valuable method as researchers have spent years building intuition for their field.

For robotics research, need finding can provide a complementary, user-driven source of inspiration and guidance, as well as refining technology push ideas to better fit an application space.

Need finding is a method that comes from the product design community [2]. The goal is to identify a set of fundamental user needs of the community that a product aims to satisfy. The need finding process is summarized in Figure 2.


Figure 2: An overview of the need finding process

Need finding begins with generating empathy for the user group through interviews, and sharing that empathy with other designers and researchers through conversations and media like videos. This is a concrete and analytic process.

The results from the interviews are then abstracted into frameworks, often presented in graphical form such as the 2×2 charts in Figures 2 and 3. The lessons from the frameworks are then converted to design implications, which are meant to be generative, allowing many interesting solutions to evolve. This process can be iterated and interleaved as necessary. The process is expanded upon below, with a description of our own implementation for this paper.

2.1 Interviews and Observations
Need finding begins with identifying a community of potential product users. A very small team goes out to visit a sample of community members in the places where they do the activities that the product supports. Immersion in the interviewee’s environment is the key to success and a distinguishing feature of need finding. This immersion inspires the participant to discuss details they might have otherwise forgotten, and allows the interviewer to quickly reconcile the interviewee’s words with reality.

It is important to note that relying on self-reported data alone is dangerous due to people’s poor memories, as well as the social desirability bias (an inclination for respondents to say things they think will be received favorably). There is even a standardized scale for measuring a person’s inclination toward the social desirability bias [5]. These problems can be so serious that some usability experts suggest completely ignoring what users say, and instead watching what they do [16].

The ethnography of robots

Finally, someone who gets the distinction between robots and robotics, and how most human-robot interaction is really human-human interaction by proxy. Stuart Geiger, a PhD student at the UC Berkeley School of Information, was interviewed by Heather Ford for Ethnography Matters. This relates very well to my supervisor Chris Chesher’s recent work on asymmetric metacommunication between human and robot [in press].


via The ethnography of robots.