Blended Reality With Robots


We are already living with robots. The future is here, but as William Gibson says, it’s not evenly distributed yet. Or as I like to say, I believe that we often don’t recognize the future when we see it.

How do we recognize robots? We usually look for humanoid robots, the stuff of science fiction. Even the classic robot ‘arm’ is part of a ‘human’. But technically, a robot is simply a machine that ‘senses, thinks and acts’. Even ISO’s international standard for industrial robot arms is broad in its definition: a robot is “an actuated mechanism, programmable in two or more axes, with a degree of autonomy, moving within its environment to perform intended tasks.”
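The ‘sense, think, act’ loop can be sketched in a few lines of code. This is a toy simulation, not any real robot’s control software; the obstacle distance, the threshold and the two actions are all invented for illustration.

```python
# A minimal sketch of the 'sense, think, act' loop that defines a robot.
# Everything here is simulated; a real robot would read hardware sensors
# and drive motors instead of mutating a dictionary.

def sense(environment):
    """Read a (simulated) distance-to-obstacle measurement."""
    return environment["distance_to_obstacle"]

def think(distance, threshold=1.0):
    """Decide on an action from the sensed value."""
    return "stop" if distance < threshold else "forward"

def act(action, environment):
    """Apply the chosen action to the (simulated) world."""
    if action == "forward":
        environment["distance_to_obstacle"] -= 0.5
    return action

def control_loop(environment, steps=5):
    """Run the sense-think-act cycle a fixed number of times."""
    actions = []
    for _ in range(steps):
        reading = sense(environment)
        action = think(reading)
        actions.append(act(action, environment))
    return actions

print(control_loop({"distance_to_obstacle": 2.0}))
# → ['forward', 'forward', 'forward', 'stop', 'stop']
```

By this definition, the washing machine and the car qualify just as readily as the humanoid: the loop is what matters, not the shape of the box around it.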

Is a car a robot? Yes. Even without full autonomy, a car consists of many autonomous systems. Elon Musk called the Tesla Model S ‘a computer in the shape of a car’. But really, it’s a robot.

Is a washing machine a robot? Visually, we would find it hard to think of it as a robot. All the ‘humanoid’ bits are hidden inside a box. Yet the modern washing machine, soon to be a washing-drying-folding machine, is a very sophisticated piece of machinery, sensing, thinking and acting in the environment.

Because we see the world through human eyes, it is very hard for us to see things outside human categories. We divide the world into humans, and things. Robots change everything. As we build robots, we are really reshaping what it means to be human. What does it mean when our devices start to look like humans? What does it mean when they don’t? And what does it mean when we use so many different devices to communicate with other people?


Our technologically blended reality is asynchronous, mediated and indirect. Our technologies allow us to communicate across distance and time, and expand our scale, creating a larger richer world. This is nothing new. Civilization is the story of technology taming space and time.

Since we invented writing, we’ve been able to communicate with other people at a distance, at different times and at larger scale than direct communication. And as we invented reproduction technologies, like the printing press and photography, the scale of our communications increased. This has had a huge impact on the world, reshaping our cultural, religious and political structures.

The last 200 years have seen the introduction of many new communication technologies: telegraph, telephone, radio and television. But they have all had one thing in common. Until very recently, we’ve been able to see who is ‘pulling the strings’. The subject or object of communication has been visible or known.

In the last decade, we’ve seen an explosion of information and communication technologies and we’ve gone wireless and unplugged. Internet technologies in the 80s and 90s were supposed to usher in an era of anonymity, but in reality they largely just increased the scale of known communications. And our connections to the devices of communication were much more obvious.


As social beings, our reality is very much defined by our communication technologies. These days, even when we are in the same physical place as other people, we are no longer sharing the same reality. We are experiencing different worlds, as if we were in our own reality bubble.

And even when we are communicating, we are no longer certain to be communicating with other people. Ray Kurzweil predicts that in the future we will mainly communicate with machines and not other people. We will experience this technologically blended reality as an extension of ourselves, as a proxy for other people, and as something with its own ‘alien’ identity.


Sharp’s new Robohon phone, created by Tomotaka Takahashi, is the epitome of a blended reality device. It acts as an extension of ourselves. It provides a proxy for other people, and it has its own very distinct identity. Our categories of ‘you’, ‘me’ and ‘it’ are more fluid than we think. Robots are blending the me and you into the it.

Heidegger was one of the first to describe technology as an invisible extension of our identity. Heidegger’s hammer is ‘present’ when we look at it and think about it. But when the hammer is in the hand of a builder, it becomes invisible. The hammer is ‘ready-to-hand’ when the builder thinks of building, not hammering or the hammer. The tool is well known and the focus is on the task instead. The hammer becomes an extension of our identity, an expression of our intent in the world.

Our technological extensions also augment our senses. A lady with feathers on her hat, as described by Merleau-Ponty, has enhanced her spatial awareness. She has increased her sense of the whereabouts of walls and doorways. Just like the whiskers on a cat, we are augmenting our world with technological whiskers.

Similarly, as technology acts as an extension of others, or a proxy, it also becomes invisible to us. Telepresence robots offer an illusion of real presence and become transparent as technologies. Our focus shifts from the tool to the task. In this case the task is the social interaction. Suitable Technologies even prefer that we don’t call their telepresence devices robots, because they want our focus to be on the experience, not the device.

Robots are becoming popular and as more of them enter our world, they bring their very own personalities and appearances. But any device with a screen, or speakers and connectivity, is capable of being a gateway for many other people. We can have relationships that are indirect, asynchronous and at scale. Our relationships can be with you, me and it and many mixtures in between.

We are going to see more and more social robots in the service industry, including health, manufacturing and logistics, and in the consumer end, including the home, retail and hospitality. And we are just starting to understand the scope of this blended technological reality with robots.


People enjoy meeting Savioke’s Relay, the robot butler now at four hotel chains in California. You can communicate with Relay, although the robot behaves more like R2-D2 than C-3PO. Relay is functional too: it is designed to deliver small items to guest rooms when the front desk staff are busy.

After collecting a lot of feedback, Savioke finds that guests not only enjoy communicating with Relay, they also appreciate not having to communicate with a person at times when they are not feeling social, e.g. late at night. The robot starts to become an extension of their wishes, but still has just enough personality to improve the experience.


A robot like Mabu from Catalia Health is acting as a proxy for a doctor or primary health care physician. Mabu will stay in the home of patients on a specialty pharma treatment where Mabu’s AI engages the patient directly in conversation and it’s only the data that is communicated to the doctor. And while Mabu the robot may sit at home, Mabu the app can travel with the patient anywhere.

And Fellow Robots’ OSHBot is really mixing all our relationships up. OSHBot can act as a simple extension. When you enter the hardware store you can ask the robot for directions and then simply follow the map. Or the robot can autonomously guide you to the correct location inside the store. You can engage the robot in conversations about the parts you’re looking for.


Robots are great at remembering tens of thousands of SKUs and where on the shelves they all are. But people are really great at problem solving and understanding complex communications. So if you ask questions like “What sort of glue should I use on a roof tile like…”, then OSHBot can call an expert in for a video call with you. So you can be talking to both the robot and another person.

For the customer, this is just a great shopping experience. But this could change the nature of daily work for the store associates, leaving them free to focus on solving the things they enjoy, with their social and expert knowledge, rather than walking miles of aisles, tracking thousands of small items.

So robots are really augmenting our reality in a multitude of ways. Robots are the embodiment of information. And in our new blended reality, they extend and augment our senses, they are the proxies or avatars for others. And they also have their very own alien identity.

As a child, I wanted to be an astronaut, to explore the universe and to meet aliens, but it turns out the aliens are here, and they can teach us a lot about what it is to be human. In research areas from neuroscience, to biomechanics and psychology, we’re using robots to better understand humans.


Ex Machina: What happens when Turing meets Bechdel test


Alex Garland’s first feature film as a director, Ex Machina, had its US debut at SxSW on March 14. This stylish idea film explores the Turing Test in a very Pinteresque fashion as a young coder falls in love with an advanced AI. Ex Machina is beautifully framed, but Garland’s stark script succeeds on the strength of the acting from Domhnall Gleeson, Alicia Vikander and Oscar Isaac.


Garland’s writing career launched in 1997 with the best-selling novel “The Beach”, which the Times called the Gen X answer to Lord of the Flies. After a string of cult successes like 28 Days Later, Sunshine, Dredd and drafts of Halo and Logan’s Run, Garland became fascinated with the emerging promise and perils of AI. In the Q&A following the SxSW screening, Garland talked about feeling a zeitgeist, a technological and cultural turning point, compelling him and other filmmakers and writers to address robots and artificial intelligence.

Although he says he’s on the side of the robots, it’s an uneasy truce. Garland describes his film as the story of ‘two brains torturing each other’. That’s true. In Ex Machina, Tony Stark meets John Searle in a gripping drawing room theater, when a billionaire tech genius recruits a young coder to administer the Turing Test to his secret advanced embodied AI.

And it’s a stark film: there are only four characters, two men and two women; two AIs and two humans. And which two are the brains? That is supposed to be uncertain, but anyone who has used the Bechdel Test to analyze films or popular culture for gender issues knows exactly where the ‘brains’ are.

The Bechdel Test started as a gender litmus test and has become a remarkably useful indicator of power imbalance. The test is named after Alison Bechdel, the cartoonist who outlined the rules in a 1985 comic strip. To pass the Bechdel Test, a film has to have two women in it, who talk to each other, about something other than a man.

Sometimes the proviso is added that the women have to have names, because some films can have many women characters, but if the characters are all “girl at checkout” and “girl with gun” then they are just devices to add color or move the action forward. And of course, possession of a name is an important indicator of personhood, or identity awareness, so it’s always one of the first steps to separate the beings from the machines.
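The rules are simple enough to encode. Here is a playful sketch of the test as a function; the film data structure and the example conversation are invented for illustration, not drawn from any real screenplay database.

```python
# A toy encoding of the Bechdel Test rules described above, including the
# proviso that the women must be named characters. The data format is
# hypothetical, made up purely for this sketch.

def passes_bechdel(film):
    """A film passes if two named women talk to each other
    about something other than a man."""
    for conv in film["conversations"]:
        named_women = [s for s in conv["speakers"]
                       if s in film["named_women"]]
        if len(named_women) >= 2 and conv["topic"] != "a man":
            return True
    return False

# Invented example: two named women discussing something other than a man.
example = {
    "named_women": {"Ripley", "Lambert"},
    "conversations": [
        {"speakers": ["Ripley", "Lambert"], "topic": "the alien"},
    ],
}
print(passes_bechdel(example))  # True
```

The point of the exercise is how little the test asks for, and how many films still fail it.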

Many films seem at first glance to have badass female characters but when put to the Bechdel Test, it becomes clear that they never talk to anyone but the main man, or if they talk to each other, it’s about the main male characters. So really, they have no interiority, no self awareness and are probably going to fail a Turing Test. That’s where I think it would be very interesting if the Turing Test were to meet the Bechdel Test more often.

Garland is also playing games with gender and the alienness of AI in Ex Machina. There is a beautiful scene where Ava, the AI, performs a reverse strip tease, putting on her human body.

But I’m afraid that Ex Machina falls at the final fence, as does just about every other science fiction film I’ve ever seen, aside from Alien. The Bechdel Test is useful for more than examining gender representation. It can be our Turing Test for creating believable alien or artificial life forms. If you look at our filmic or cultural representations of the other or alien, then you have to be struck by the singular nature of them all. From Frankenstein to Big Hero 6, do they have any reality without the central human characters?

No, they are alone. Even Alien is alone. At least in Frankenstein, it is the utter aloneness of the new form that is the whole story. Films that have pushed the envelope are few. And doing a quick mental check, I was left feeling empathy for the ‘others’ in only a couple, like Westworld, Blade Runner and Planet of the Apes, and in the books of writers like Brin and Cherryh.

How believable are our ‘other’ AIs and robots? Brad Templeton said that an autonomous vehicle isn’t autonomous until we tell it to go to the office and it decides to go to the beach instead. A life outside of our anthropomorphic story is what’s missing from our AIs, aliens and others. We don’t really care about them or their lives outside of their impact on our own. And this makes us poorer.

The final shot is a haunting homage to Plato’s Cave, although Garland credits his Director of Photography entirely for it. In The Republic, Plato posed the question: what if humans were born chained to face a cave wall, seeing the world only as the shadows passing in front of a fire behind them at the mouth of the cave? Imagine the difference when you see the world, unchained from the cave.

I can’t say more. Go see Ex Machina. And use the Bechdel Test on everything.

Avatars and Social User Interfaces


How do we talk to our machines in the 21st century? From typing to swiping, 20th century interfaces translated the world of switches, gears and punch cards into a language that anyone, from the smallest toddler, could speak.

In the 21st century, the graphical user interface is giving way to a social user interface. We have multiple devices with inputs ranging from vision, words, speech, touch, gesture, even emotion. While humans have adapted to communicating with devices in very ‘dumb’ ways, we are on the verge of much smarter interfaces, which can make sense of a combination of inputs and even understand the context. The GUI becomes the SUI, a whole social way of interacting with our machines.

Avatars are our force-feedback loop. We have already started interacting with voice avatars. The next step is visual avatars that include speech, face, body and even simulated emotion. The avatar translates machines into human language and expression, and avatars will be the ‘face’ of our new social user interfaces.

What is already happening today? What are the problems? What are the possibilities?

Robots, humans and the need for science super communicators


 “Take me to your leader.” Complex technologies require multidisciplinary thinkers, who can bridge knowledge silos and convey nuanced science and technology messages effectively. We don’t educate for these qualities nor do we reward them in employment. Where do we turn for policy advice in an emerging area that spans multiple domains?  Where do we find our next generation of technology leaders?

These are probably the most significant unrecognized problems the world faces in future-proofing for technological change. The 1953 New Yorker cartoon that started the “Take me to your leader” meme shows two aliens newly arrived on Earth asking a donkey to, effectively, give them policy guidance. This is exactly what our ‘brave new’ human-robot world looks like.

The recent, widely reported Pew Research Center report on AI, Robotics and the Future of Jobs highlights the ridiculousness of the situation. The report canvassed more than 12,000 experts sourced from previous reports, targeted listservs and subscribers to Pew’s research, who are largely professional technology strategists. Eight broad questions were presented, covering various technology trends, and 1,896 experts and members of the interested public responded to the question on AI and robotics.

The problem is that very few of the respondents have more than a glancing knowledge of robotics. To anyone in robotics, the absence of people with expertise in robotics and AI is glaringly obvious. While there are certainly insightful people and opinions in the report, the net weight of this report is worthless, particularly as findings are reduced to executive summary level comments such as:

 “Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers – with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”

 These findings could have come from any old vox pop. However, the Pew Research Center is well respected and considered relevant. The center is a non-partisan organization which provides all findings freely “to inform the public, the press and policy makers”, not just on the internet and future of technology, but on religion, science, health, even the impact of the World Cup.

 How do you find the right sort of information to inform policy and public opinion about robotics? And where do you find technology leaders for an emerging multidisciplinary field? 

Improving the quality of public policy

Papers like Heather Knight’s “How Humans Respond to Robots” for the Brookings Institution, and organizations like Robohub and the Robots Association, are good examples of initiatives that improve public policy debate. At one end of the spectrum, an established policy organization is sourcing from established robotics experts. At the other end, a peer group of robotics experts is providing open access to the latest research and opinions within robotics and AI, including exploring ethical and economic issues.

Reports – Building public policy through good design

Heather Knight’s report “How Humans Respond to Robots: Building Public Policy through Good Design” for the Brookings Institution is a good example of getting it right. The Brookings Institution is one of the oldest and most influential think tanks in the world, founded in Washington D.C. in 1916. It is non-partisan and generally regarded as centrist in agenda. Although based in the US, the institution has global coverage and attracts funding from both philanthropic and government sources, including the governments of the US, UK, Japan and China. It is the most frequently cited think tank in the world.

Heather Knight is conducting doctoral research at CMU’s Robotics Institute in human-robot interaction. She has worked at NASA JPL and Aldebaran Robotics, she cofounded the Robot Film Festival and she is an alumna of the Personal Robots Group at MIT. She has degrees in Electrical Engineering, Computer Science and Mechanical Engineering. Here you have a person well anchored in robotics with a broad grasp of the issues, who has prepared an overview of social robotics and robot/society interaction. This report is a great example of public policy through good design, if it does indeed make its way into the hands of people who could use it.

As Knight explains, “Human cultural response to robots has policy implications. Policy affects what we will and will not let robots do. It affects where we insist on human primacy and what sort of decisions we will delegate to machines.” Automation, AI and robotics are entering the world of human-robot collaboration, and we need to support and complement the full spectrum of human objectives.

 Knight’s goal was not to be specific about policy but rather to sketch out the range of choices we currently face in robotics design and how they will affect future policy questions, and she provides many anecdotes and examples, where thinking about “smart social design now, may help us navigate public policy considerations in the future.”

 Summary: “How Humans Respond to Robots”

 Brookings Report

Firstly, people require very little prompting to treat machines or personas as having agency. Film animators have long understood just how simple it is to turn squiggles on the screen into expressive characters in our minds and eyes. We are neurologically coded to follow motion and to interpret things as social or intentional actions. This has implications for human relationships, to the extent that studies show that we bond with devices and can also enjoy taking orders from them.

 There is also the impact of the uncanny valley, a term which describes the cognitive dissonance created when something is almost, but not quite, human. This is still a fluid, and far from well understood, effect but it foreshadows our need for familiarity, codes or conventions around human-robot interactions. Film animators created a vocabulary of tricks that create the illusion of emotion. So too robot designers are developing tropes of sounds, colors, and prompts that may borrow from other devices like traffic lights or from popular culture, to help robots convey their intentions to people.

 And Knight draws attention to the fallacy of generalization across cultures, with regard to our response to robots. Most HRI studies also show that we have very different responses along other axes, such as gender, age, experience, engagement etc. regardless of culture.

 Similarly, our general responses have undergone significant change as we’ve adapted to precursor technologies such as computers, the internet and mobile phones. Our willingness to involve computers and machines in our personal lives seems immense, but raises the issues of privacy and also social isolation as well as the more benign prospects of utility, therapy and companionship.

As well as perhaps regulating or monitoring the uses of AI, automation and robots, do we need to be proactive in considering the rights of machines? Or at least in considering conventions for their treatment? Ethicists are doing the important job of raising these issues, ranging from what choices an autonomous vehicle should make when faced with two options that both involve human injury, to whether we should ‘protect’ machines in order to protect our social covenants with real beings.

 This suggests that by default we should create more machines that are machine like, that telegraph their constraints and behaviors, rather than impulsively seek to anthropomorphize and personalize our devices.

Knight outlines a human-robot partnership framework across three categories: Telepresence Robots, Collaborative Robots and Autonomous Vehicles. A telepresence robot is comparatively transparent, acting as a proxy for a person, who provides the high level control. A collaborative robot may be working directly with someone (as in robot surgery) or be working on command but interacting autonomously with other people (e.g. a delivery robot). An autonomous vehicle extends the previous scenarios and may be able to operate at distance or respond directly to the driver, pilot or passenger. The ratio of shared autonomy is shifting towards the robot.

 The challenge is to create patterns of interaction that minimize friction and maximize transparency, utility and social good. In conclusion, Knight calls for designers to better understand human culture and practices in order to frame issues for policy makers. 

 Robohub – Using 21st century tools for science communication


 In the 21st century, science has access to a whole new toolbox of communications. Where 19th century science was presented as theater, in the form of public lectures and demonstrations, 20th century science grew an entire business of showcases, primarily conferences and journals. New communication mediums are now disrupting established science communication. 

There is an increasing expectation that science can be turned into a top 500 YouTube channel, like Minute Physics, or an award-winning Twitter account, like Neil deGrasse Tyson’s @neiltyson, which has 2.34 million followers. We are witnessing the rise of MOOCs (massive open online courses), like the Khan Academy, and Open Access journals, like PLOS, the Public Library of Science.

UC Berkeley has just appointed a ‘wikipedian-in-residence’, Kevin Gorman. The ‘wikipedian-in-residence’ initiative started with museums, libraries and galleries, making information about artifacts and exhibits available to the broader public. This is a first for a university, however, and the goal is twofold: to extend public access to research that is usually behind paywalls or simply obscure, and to improve the writing, researching and publishing skills of students. Students are encouraged to find gaps in Wikipedia and fill them, with reference to existing research.

 In between individual experts and global knowledge banks, there is space for curated niche content. Robohub is one of the sites that I think can play an integral role in both shaping the quality of debate in robotics and expanding the science communication toolbox. (Yes, I’m deeply involved in the site, so am certainly biased. But the increasing number of experts who are giving their time voluntarily to our site, and the rising web traffic, give weight to my assertions.)

 Robohub had its inception in 2008, with the birth of the Robots Podcast, a biweekly feature on a range of robotics topics, now numbering more than 150. As the number of podcasts and contributors grew, the non-profit Robots Association was formed to provide an umbrella group tasked with spinning off new forms of science communication, sharing robotics research and information across the sector, across the globe and to the public. 

 Robohub is an online news site with high quality content, more than 140 contributors and 65,000 unique visitors per month. Content ranges from one off stories about robotics research or business, to ongoing lecture series and micro lectures, to inviting debate about robotics issues, like the ‘Robotics by Invitation’ panels and the Roboethics polls. There are other initiatives in development including report production, research video dissemination and being a hub for robotics jobs, crowd funding campaigns, research papers and conference information.

 In lieu of a global robotics policy think tank, organizations like Robohub can do service by developing a range of broad policy reports, or by providing public access to a curated selection of articles, experts and reports. 

 Improving technology leadership in a multidisciplinary field


 As the size and scope of the sciences expand, the ability of individuals to map the world has disappeared. In the Renaissance, all of human knowledge could fit into one library, one university. There are now thousands of universities and millions of books. Science has been divided into major fields and hundreds of subfields.  An emerging field such as robotics draws from expertise in a range of other more established fields, but has few of its own named traditions, courses or experts. 

How does this impact the evolution of the field? Firstly, sourcing good policy guidance is difficult, as it is hard for outsiders to know where the robotics experts are. At an individual level, career trajectories are not well understood by others. A roboticist may work on mechanical automation, then on household appliances, then on interactions, then on software or on sensors, each in a different market area. Large organizations may have the resources and the innovation culture required to work across silos, but smaller organizations, particularly startups, are locked out.

Startups, which are often seen as innovative, are actually poor at systematic innovation; they are just very good at executing on a unique idea. Startups, like robotics and some other areas of science, politics and finance, are remarkably homogenous and conservative in makeup. This is both a problem for women and other minorities in tech, and an opportunity for organizations to proactively bring diverse thinkers on board as a systematic way to engage more deeply with different domains, to ‘speak in different languages’, to reap the innovation benefits of multidisciplinarity and to meet the burgeoning skills shortages.

 Silos and polymaths


In 1802, Thomas Young delivered a series of 50 lectures to the Royal Institution in London. A physician and scientist, Young covered everything from disproving Newton’s theory of light to translating the Rosetta Stone. He touched on all major fields of science, and it is posited that he was the last real polymath, or ‘Renaissance Man’. These days it is not possible to grasp a field in only a few years’ study, let alone do justice to all of them.

In 1956, CP Snow published an essay called ‘The Two Cultures’, about the schism between intellectuals and scientists, between the arts and the sciences. While Snow may have been disproved over the last 50 years on his conclusions about the moral health of the scientists, with their ‘staunch heterosexuality’, versus the intellectuals and their effete culture, his thoughts on the need to understand the basic tools and dialogues of both sides of the chasm, particularly with regard to robotics, are prescient even if still in the realm of science fiction.

 “It is more justifiable to say that those without any scientific understanding miss a whole body of experience: they are rather like the tone deaf, from whom all musical experience is cut off and who have to get on without it. The intellectual invasions of science are, however, penetratingly deeper. Psycho-analysis once looked like a deep invasion, but that was a false alarm; cybernetics may turn out to be the real thing, driving down into the problems of will and cause and motive. If so, those who do not understand the method will not understand the depths of their own cultures.”

 And it isn’t just that science is needed to understand culture, but that culture is how science is practiced and disseminated. It’s a fallacy to think that being human is sufficient to understanding human cultural, social or economic practices, particularly in a systematic fashion. 

Snow’s essay anchored his book, “The Two Cultures and the Scientific Revolution”, in which he posits that knowledge operates in ‘silos’ and that even scientists and engineers speak mutually incomprehensible languages, a growing problem for society. For more than 50 years, Snow’s thesis has been widely debated and solutions sought, but as many point out, the very people who seek solutions are caught up in all the apparatus of the institutions creating the problem. Universities are amongst the worst for talking the talk, not walking the walk. Multi- and inter-disciplinarity have become buzzwords rather than real practices.

 When I studied what I call human-robot culture, but technically was called ‘digital cultures’ alongside the ‘social robotics group’, my very small academic unit was based in the Humanities on one side of the campus and partnered with a very small unit based in Engineering on the other side. The campus was divided by the Pacific Highway, Australia’s oldest and largest arterial roadway. We were tenuously connected by a slim aerial footbridge. It frequently struck me that this orthogonal layout was a direct metaphor for the organization of the robotics field. 

Not only was each group a silo isolated within its parent discipline, digital cultures within film and social robotics within the field robotics center, but neither was integrated into the broader robotics field. And the academic world was oddly shut off from the mainstream world of commerce, employment and general life that streamed along the highway in our midst. (I have been trying to make the cross-disciplinary connections stronger ever since, though fortunately I am no longer striving for academic success.)

 Robotics needs to work across silos. The discipline involves mechanical and electrical engineering, control systems and cybernetics, computer science, artificial intelligence, psychology, design, and to effectively apply new technology in the world, interaction specialists and business specialists. What’s often missing from the mix? Polymaths. Super communicators who can translate the languages of each silo and go beyond, to get the big picture.

Robotics is still an emerging field, a technology growing in complexity and applied from an increasing number of disciplines. There are parallels in architecture. Architects are essentially the people in command of any built structure, no matter how complex it is or how many other disciplines are involved in the planning and construction. As such, they are highly trained in the social context as well as the technical constraints. Architecture evolved out of the building profession as the complexity of requirements increased. Robotics, by contrast, with comparatively few applications in the real world, risks being the tail wagging the dog.

Robotics is one of the worst cases of ‘two cultures’ syndrome, and it will remain an appendage until we build robust models of how people want to live alongside smart machines. As well as good technology leadership, we need a feedback loop for input from the rest of the world.

 Explaining the rest of the world to the people inside the building


Large organizations have the ability to onboard ‘thinkers’ who can interpret the rest of the world in a framework of ethics or policy, and this fits into a broader innovation culture. It can contribute to competitive commercial advantage, but it entails a commitment to looking further ahead and afield than is usually corporately comfortable. In 1998, Intel lured cultural anthropologist Genevieve Bell away from her research at Stanford to, as Bell puts it, “explain the rest of the world to the people inside the building”.

This can have the complementary effect of also helping to translate the technology story into the languages of the rest of the world. As well as being named to several ‘top people in technology’ lists, Bell is also “Thinker in Residence” for South Australia. That’s a job title that conveys, “we don’t know where you’re going but we know someone needs to fill in the gaps on our roadmap”. Since her original work in the Digital Home Group, Bell has gone on to lead Intel’s new Interaction and Experience Research Group. 

We can see some fruits of Intel’s commitment to finding new places on the product road map in Jimmy, the 21st Century Robot. Just released by Intel Labs, Jimmy is an open source, 3D-printable robot that can be customized, personified and extended with applications to suit. This robot is clearly designed to engage with the maker movement and with changing methods of technology production, reaching new communities for robotics.

“By inviting the public to participate in the evolution of robots and fast-tracking innovation, the number of possibilities increase exponentially and hasten the reality of new developments in areas like healthcare, public transportation, and other sectors that can vastly improve all our lives.”

Intel is one of a number of large corporations making a virtue of thinking outside the box, fostering diversity and internal innovation, and driving social change alongside their technologies. On the whole this is framed as harnessing ‘entrepreneurial’ thinking rather than traditional business-growth thinking, with Amazon and Salesforce being great examples of companies topping the innovation metrics (see the Forbes innovation lists).

Traditional methods work well for well-understood problems, but when faced with great uncertainty, entrepreneurial methods are most useful. This is lean startup methodology, formalized as ‘The Innovator’s Method’ by Jeff Dyer, Professor of Strategy at BYU, and Nathan Furr, Professor of Entrepreneurship at BYU, who developed the metrics behind the Forbes lists as part of a 10-year study.

“The common themes we found were that most had adopted some type of ‘idea management system’ to capture insights that could be turned into innovations. We also discovered that most of the companies had developed deep expertise in principles like those described by design thinking principles (e.g., techniques used to deeply understand customer needs), lean start-up principles (e.g., techniques to rapidly experiment and test prototype solutions to those customer needs), or both. They also applied those same experimentation principles to test different elements of their business model to take their solution to market.”

 The first two critical steps in the Innovator’s Method are to acquire insight and then to discover deeply what the problem is, who it affects and how. Only then do you build, measure and repeat until you have a business model. Customer discovery is built upon a foundation of surprise, capitalizing on unappreciated information.

Among the questions that Bell has tried to answer at Intel are “Where are all the women?” and “What do they want?”. Robotics is an applied science, a technology deeply implicated in the real world, so the question “What good is this technology?” matters at every level, including funding its development. It makes practical sense to draw on as much diversity and deep domain knowledge as possible, through an organizational commitment to seeking it outside via the innovation or customer development methods described. That requires technology leadership capable of an anthropological appreciation of difference and practiced at speaking across silos.

Regrettably, we do not educate for, nor reward, people with polymath tendencies. 

Circular careers and super communicators

It’s said that women’s careers follow a circular path, whereas men’s follow a straighter line. This ‘problem’ for women seems to match the ‘problem’ of finding broad technology leadership for robotics. A circular career is in many cases an ideal multidisciplinary training ground.

A lack of leadership is an increasing problem for robotics as the number of robotics applications and companies grows. As an emerging field, robotics does not have a deep pool of experienced talent looking for new challenges. Rich Mahoney of SRI International and Chris Moehle of NREC, both charged with overseeing the commercialization and ‘spinning off’ of robotics ventures, identify this as their biggest problem.

This persists in spite of the increasing number of joint MBA/engineering programs offered by Stanford, MIT, CMU, Cornell, Rice and others, and the increasing number of business accelerators at universities. Entrepreneurial experience and insight into a real-world issue are still required. Historically, some places have punched well above their weight in producing the next generation of technology leaders. For example, Vytas Sunspiral is a multidisciplinary thinker, a senior researcher at NASA’s Intelligent Robots Group and a graduate of Stanford’s Symbolic Systems major, which spanned Philosophy, Psychology, Linguistics and Computer Science. The alumni roll includes Reid Hoffman and Marissa Mayer.

As Rodney Brooks says, ‘these days robotics isn’t a technology problem, it’s a business model problem’. Robotics is now attracting investors, but where are all the viable startups? It takes a polymath, or a science super communicator, to make the transition from business to technology to real-world application and back again.

Sunspiral believes that multidisciplinary programs are essential for robotics innovation and leadership. The difficulty is that we identify these nexuses in hindsight rather than at the time, making them hard to create by design. So it’s only logical to look at other ways of engaging people with the same qualities. As a generalization, women tend to gravitate towards robotics that is highly applied in the real world, and this is one of the characteristics needed for a new generation of robotics business and technology leaders.


There is also a good supply of women tangential to the field who could be brought in. In spite of decades of incentives and measures aimed at getting women into the STEM ‘pipeline’, women continue to leave at a much higher rate than men. Women with degrees in STEM subjects are not proportionately represented at the higher levels of either the academic tenure track or professional practice.

It’s well documented that women tend to move sideways, out of science and into education, communication and lower management positions. This is frequently put down to the demands of family/work balance and a strong gender-based desire to help. Recent studies by Boyce and Kitzinger describe this as leaving ‘bench science’ as a career path, and they begin to frame ways in which it could become a positive move, utilizing women’s strengths as professional science communicators.

I see this as extending beyond the need for science communicators to meeting the need for technology leaders, entrepreneurs, CTOs and thinkers: people who understand the broader social ecosystem surrounding technologies; who care about a technology’s purpose; who are interested in application; who have experience across a broad range of work areas and skills; and who communicate complex technologies well.

There is an opportunity here for organizations to systematically bring forward women as leaders, changing the popular perception of a circular career path from a negative into a positive accomplishment, the mark of a multidisciplinary mindset.

In conclusion 

It’s Moneyball: for every problem there is an advantage to be seized. There is a systematic opportunity to educate, fund and organizationally embrace multidisciplinary thinkers and communicators, by developing better metrics and by recognizing skills and qualifications that are not linear. This is an imperative for startups, spinoffs and commercialization, and it is an opportunity that women, given the right framework, are well positioned to fill.

Useful links