2021-11-18

Don't Kick the Robot

Spot Classic, via Boston Dynamics.

A conversation with Kate Darling on our future with robots

Dr. Kate Darling is a researcher at the MIT Media Lab and an expert in robot ethics who wants to shift the way we think about robots. Her work - and her new book, The New Breed - investigates social robotics and human-robot interaction, pushing back against fears that robots will automate human jobs and replace friendships.

Scope of Work's Members’ reading group recently read The New Breed, and invited Kate for an hour-long conversation about the book. What follows is an edited and condensed transcript of our discussion, which included Kate, Scope of Work's team, and other Members of Scope of Work.


Hillary Predko: Manufacturing is largely task-based, which makes it easier to introduce automation than in other industries. From Unimate, the first industrial robot, developed in the 1950s, to Tesla’s highly automated Fremont factory, robots have a long history in manufacturing. Fully automated factories remain an elusive goal, but fears of robots replacing jobs are persistent. How can we think about automation and manufacturing more holistically?

Kate Darling: I've been all over the world over the past 10 years talking to people from different walks of life and different industries about robotics. It's always struck me that we have this constant comparison between robots and humans - it feeds into all of our conversations and expectations about our futures with robots. We're constantly trying to recreate something that we already have rather than thinking more creatively about these technologies and what they can supplement.

I wrote about the Tesla story in the book, and I love that we seem to think robots are on the precipice of taking over everyone's jobs, yet we struggle to automate even the most predictable, well-defined scenarios. If a screw falls on the floor, a person understands how to adapt to that situation while a machine does not. It's funny how little we value human intelligence in that way.

We have this technological determinism about robots coming in to automate human labor, but we should think more creatively about what humans are good at, what robots are good at, and how they can actually work together. This could increase productivity, and let people have more fulfilling, safer jobs doing work that isn’t dull, dirty, or dangerous. Of course, we need to retrain people who work in disrupted industries or we could end up creating a different type of dystopia, but the idea of wholesale replacing people with robots doesn't seem to be panning out.

KUKA robots building Teslas, via CleanTechnica.
Spencer Wright: Looking back at the past 50 years of robots in manufacturing, the impressive cases of automation have largely removed the possibility of face-to-face interaction between humans and robots - the robots are completely caged off. Is it even coherent to think of robotics as a single category that encompasses caged-off industrial equipment as well as, say, companion robots for seniors?

KD: As you said, a lot of robots in factories have been caged off for safety. That's gradually changing, as people develop robots that can work together with humans on the factory floor. But when it comes to the conceptual framework, the answer really depends on why we're asking. Robotics spans a huge range of technologies, with all kinds of edge cases. There are huge differences between a robotic arm working in a factory and Amazon's Astro rolling around your home with its little face. The function is different, the way people relate to it is different, so in some ways, it's a different technology. So should we call them something different?

Again, it depends on why you're asking. We have these weird concepts in our minds of what robots are - and generally, these concepts are inconsistent and largely driven by pop culture sci-fi. Of course, these concepts change over time. We might think of a robot as something that automates a task, but once it becomes mainstream we call it something mundane, like a dishwasher. Our concept of robots has to do with novelty and excitement and is informed by these weird biases.

One big change in robotics in the last decade is that, with advances in machine learning, robots are moving into the real world before they’re ready for prime time. The shortfall of machine learning is that it requires training on real-world data - cars are a big example. Self-driving cars aren't ready, and they're only going to get ready by driving 100 million miles on our streets. There's no way to get a self-driving car ready to go in the factory, unlike the robotic arms of years past. This is a paradigm shift, and as a society we’re debating what the right balance is between progress and safety.

One of the main challenges we’re facing is this super long tail of unexpected situations. As robots come into the real world, rather than the confined factory context where we can completely control them, companies haven't necessarily taken into account some of the unexpected things that happen. With the robots in our grocery stores, a little team of eight engineers didn't necessarily anticipate that people would call the robot Mary, give it googly eyes, or think it's creepy and watching them.

Soviet military dog training school, 1931, via Wikipedia.
HP: In the book you share unexpected stories about human/animal partnerships, like Soviet anti-tank dogs, to exemplify the diverse ways we could think about working in partnership with robots. Can you speak to the human drive to anthropomorphize both robots and animals, and how thinking about our history with animals can inform how we think about robots?

KD: The point isn't to say that robots and animals are the same, but to explore how we've been able to use the physical and sensory abilities of animals to extend our own abilities and partner with them. This comparison helps open people's minds to new possibilities for robots - whether that's transportation, delivering things like carrier pigeons did, looking for things underwater like trained dolphins, or even autonomous weapons - there are rumors that those trained dolphins were outfitted with weapons. I think this robot-as-animal analogy illustrates that some of the concerns we have about robots may not be such a big deal, and that there are other concerns we might need to pay more attention to, like consumer protection.

For instance, in the book I talk about how there's a lot of fear that robots will replace our social relationships as they get more advanced - fears that people will replace their partners with sex robots, and all these scenarios of human replacement. That seems absurd when, instead of comparing robots to humans, you compare them to pets. As dogs became more a part of the American family back in the '60s and '70s, there might have been some psychologists who had concerns about pets, but no one today believes that pets are a problem or that they replace human relationships. In fact, we view them as a very healthy and good supplement to our human relationships.

James Coleman: I was broadly compelled by the central idea in your book, that we should apply mental models from our history with animals to robots. One major difference, however, is our ability to make rapid changes in the development of a robot, compared to the slow changes in animal genetics. There's a natural speed limit on how fast we can shift a species' development, but someone could pretty quickly create a robotic salesman that's perfectly designed to target me.

KD: That's absolutely right - maybe I should have touched on it more in the book. With the pace of technology, and with major companies having access to so much data, they have these giant testing grounds for development (and, for the most part, don't need ethics approval to test on consumers). In terms of how to manipulate people, it takes a long time to change the size of a dog's eyes through breeding, whereas with a robot, you can just change it. That's an area where we need to be much more careful than we have been with animals - there are limits on what animals can persuade you to do, whereas with robots, I see no future in which companies don't try to use their access to individuals to their advantage and to the detriment of certain parts of the population. I think we need to be really careful. It’s worth noting, though, that while you can iterate on a physical robot much faster than on an animal, there are still more limitations than people may think.

Lars and the Real Girl, via IMDb.

There are many ways that the emotional connection to our robots could be manipulated by the people who are selling or programming them, and I think these concerns should get more attention. Unlike animals, robots can be programmed to serve specific interests beyond those of the direct user. I'm planning a study right now looking at social robot persuasiveness, because I'm interested in consumer protection issues. We have solid regulations around kids and other vulnerable parts of the population - there are laws limiting how products can be advertised to children, for example - and as soon as a parent doesn't like a technology, you can be sure it will hit the media. My question is: do we need to be thinking about consumer protection regulations for adults as well? These products could potentially be persuasive at a level that approaches subliminal advertising, and I think we currently underestimate the social bonds that adults will forge with robots or virtual characters. We’re getting into this really interesting, messy realm of persuasion where we have to ask, “Are people really able to make their own decisions, or do we need to intervene?”

SW: You write at length about the many absurd ways that humans have applied our idiosyncratic ethical codes to animals - for instance, putting populations of rats on trial. You also give quite a bit of credence to the idea that humans have an ethical responsibility to robots. I wonder where you end up there - what do you think our ethical relationship with robots will be in the coming decades?

KD: On the one hand, I'm like, “Look how our anthropomorphism has led us astray!” and on the other hand, I'm like, “Okay, but maybe we shouldn't recreate Westworld.” The rules we create for robot ethics need to be intentional, evidence-based, and sensible. That's part of the problem with putting animals on trial, which I use to illustrate the absurdity of companies saying, “Oh, it was the algorithm’s fault.” If we compare algorithms to animals, we've gone back to the age when we put pigs on trial for the crimes they'd committed, rather than holding the humans in charge of the pigs accountable. I think that's a very important piece; what I want is rules that ensure humans are held accountable.

Similarly, in the case of our ethical treatment of robots, it's important to acknowledge where biases are, and then to say, “Okay, is it causing harm if we let people behave violently towards robots? Is it desensitizing them to violence in other contexts?” In some cases, that may look like being nice to robots, if the evidence shows that hurting robots trains people's cruelty muscles. But we don't currently have that evidence - more research is needed.

We're going to find ways of folding robots into all aspects of our society as tools, products, or companions, and they fall into this weird, ontologically ambiguous category that we've had to figure out with animals. And honestly, conversations about animal rights continue to be messy. There's always been the argument that animals experience pain differently, or that their experience is fundamentally different from ours, and we've used that to justify not extending rights to them. I think this illustrates just how messy our discussions around robot rights will be.


Thanks to Kate for joining the conversation, and to James, Jason, Carl, Aaron, Arushi, and Victoria for bringing a whole slew of other excellent questions during our discussion. Starting 2021-12-03, we’ll be reading How Buildings Learn by Stewart Brand. Join as a Member today and chat with Stewart when we wrap the next book.

Hillary Predko
Hillary Predko is an interdisciplinary artist and researcher who works across the boundaries of craft and computation with a penchant for trash.