Policy is Just Hard Science Fiction: An interview with Annalee Newitz

An interview with Annalee Newitz exploring labour rights and working conditions, anthropomorphisation of AI, and a cultural obsession with productivity.
Sarah Villeneuve
Alumni, Policy Analyst
July 17, 2020

Annalee Newitz is an author and journalist whose work covers emerging technology, science, and culture. They have written a variety of non-fiction books and novels, and have published science journalism in WIRED, The New York Times, and New Scientist. In 2008, Newitz founded io9, an online media outlet focused on technology, science fiction, futurism, and fantasy. They have also served as editor-in-chief of Gizmodo and worked for the Electronic Frontier Foundation.

Set in the year 2144, Annalee’s first novel, Autonomous, illustrates a future in which humans and AI-robots live and work alongside one another, human rights have been replaced by citizenship contracts with sponsoring corporations, and tensions are high between anti-patent activists and the global pharmaceutical industry. The story follows Jack, a pharmaceutical researcher turned anti-patent scientist who pirates in-demand medications and distributes them to people in need through the black market. However, Jack is being closely tailed by two agents: Paladin, a brand new military-grade robot, and Elias, a human, both working for the International Property Coalition. As the agents close in on Jack, each character grapples with questions of humanity, sexuality, capitalism, and intelligence.

These questions are not so different from the ones currently being discussed by AI ethicists and policymakers. Should robots have rights? What is considered intelligence? Is it ethical to make human-like robots? Who owns original artistic and literary works autonomously generated by AI systems?

We were lucky enough to speak with Annalee Newitz about the anthropomorphisation of artificial intelligence, surveillance, robot rights, the role of universities and independent research labs, and the future of AI policy.

"I'm really interested in the question of labour and what's going to happen to labour. And it seemed to me really obvious that indentured servitude would be something that would return to the mainstream, especially as labour gets devalued."

Annalee Newitz

Sarah: So some of the most interesting science fiction stories are the ones that result from an author’s passion for a research topic. I’m curious about what research questions you were considering throughout the writing process of this book, as well as in your current work?

Annalee: Oh, yeah, that’s a great question because I love doing fiction based on research. The funny thing about Autonomous is that the germ of the idea did not really come from research on automation, per se. I was working on a book, which came out several years ago now, about mass extinction (a happy topic), and I was visiting an earthquake simulation lab at UC Berkeley. What they do there is super interesting; they build a physical structure like a small house or a piece of a freeway, and then use a robotic arm to put pressure on it and deform it in ways that imitate what an earthquake would do, monitored by lots and lots of sensors. In a computer simulation they can add pieces to the structures. So they have a physical structure that the robot arms are deforming, but then they also have a simulated structure that is responding to the same forces in their software program. It’s this weird half simulation, half real life crushing of a giant structure. The robotic arms they use are quite big and bulky and they are attached to the wall while crushing a house. And I kept thinking about those robotic arms and wondering what they would evolve into eventually. And that is what turned into one of the first scenes in Autonomous, certainly the first one I wrote down, where the robot character named Paladin gets his arm shot off. It is dealing with what it means to lose an arm, what it means to get a new arm. I live in the San Francisco Bay area near Silicon Valley and I’m surrounded by people talking about automation and how automation is going to be used everywhere from writing sports articles to doing surgery. So I was thinking of all of that stuff in the back of my mind. I’m really interested in the question of labour and what’s going to happen to labour. And it seemed really obvious to me that indentured servitude would be something that would return to the mainstream, especially as labour gets devalued. 
All of that stuff got kind of mushed up together in my head and resulted in the story that you see now.

"I don't think robots think the same way people do, I mean an artificially intelligent robot. It would be a huge mistake to constantly assume that our automation has the same priorities that we do."

Annalee Newitz

Sarah: You brought up indentured servitude, which I will have some more questions about soon. But another question I have is: your book includes a number of robots whose bodies resemble those of humans, and I’m curious what your thoughts are on the anthropomorphisation of artificial intelligence and what kinds of laws and regulations it might encourage.

Annalee: Huh, well I think that’s two separate questions, because the process of anthropomorphisation isn’t just a legal one, it’s psychological and can run the gamut from attributing human feelings or human desires or motivations to machines, that might actually have motivations and desires that wouldn’t necessarily be like human ones. They might have very different sets of motivations. The way I think about that, and that a lot of AI researchers now think about it, is that if we ever do get some sort of human-equivalent artificial intelligence, it would be a very non-neurotypical intelligence that might be very specialized. And in that way, it would be very human because humans have all different kinds of intelligence. But it would be different from our non-neurotypical intelligence or our specialized intelligence. I don’t think robots think the same way people do, I mean an artificially intelligent robot. It would be a huge mistake to constantly assume that our automation has the same priorities that we do. But as for laws, I think the anthropomorphisation urge is going to have some benefits, especially if you have robots that look like people, some people will want to protect those robots and give them something like human rights. In the same way that people have fought to give chimpanzees human rights. I think there’s always going to be that urge, which is a very pro-social urge. One of the things I really like about humans is that a lot of us do tend to want to extend the idea of humanity to many non-human life forms for the purpose of protection. Of course, there’s the other side too, where there’s going to be people who want to abuse and torture anything that looks like a person. So that’s the dark side that I can pretty much guarantee will happen alongside robot rights. 
I think it’s going to be weird if we actually are able to prove in some way that we have human-equivalent artificial intelligence, because there’s always going to be some people who will say ‘these are machines, they don’t deserve any kind of rights’. It’s really hard to prove that something is intelligent in the way a person is, so I suspect that it’ll lead to a lot of legal confusion and that some of the early laws will be property laws and won’t be about establishing human rights for robots, but more like establishing property rights, for whoever is responsible for the robot. If the robot kills someone, who is responsible? If the robot does something really great, who owns the intellectual property that the robot created? I think that’s where we’re going to start. And then maybe, if we’re lucky, we’ll wind up in a place where we’re actually thinking about human rights for robots.

Sarah: You can see some of that kind of playing out right now.

Annalee: It’s already happening for sure. I’m just stealing this idea from Ryan Calo, who has thought a lot about robot law.

"The mechanical bride idea has existed for a really long time. And it shows that, at least in the West, where we're developing these personalities for our machines, we still think of women as caretakers and servants, and we still think of women's voices as submissive and non-confrontational."

Annalee Newitz

Sarah: Building off of that, there’s a lot of discussion surrounding the gendering of artificial intelligence systems, regardless of whether they are embodied or disembodied, and we have a lot of examples of that today, like Amazon Alexa and Siri. I’m wondering what your thoughts are on this, how you see it impacting how the public interacts with current technology, and the roles that we want AI to play in our lives.

Annalee: This is something that, as you said, a lot of people have already explored. And I feel like I have the very standard feminist response to the fact that so many AI’s or assistant-type things like Siri and Alexa, have been gendered female. It’s pretty clear what’s going on there. It’s basically a way of creating the mechanical bride. The mechanical bride idea has existed for a really long time. And it shows that, at least in the West, where we’re developing these personalities for our machines, we still think of women as caretakers and servants, and we still think of women’s voices as submissive and non-confrontational. And part of the thought that goes into designing something like Siri is ‘how do we get something that’s comforting, that people won’t be scared of’, because there is this concern that people will be disturbed by having a robot assistant or an AI assistant. So we’ll make it a girl and then everyone will understand how to use it. I think that’s just a basic problem with gender and it spills over into lots of areas of our lives, not just automation. 

On the other hand, one of the things that I’ve explored in Autonomous and other writing is that we don’t always project femininity onto our machines. Sometimes our machines are men, usually when they are war machines. And so I think, again, we’re going to just continue doing that. When I was writing Autonomous, my editor kept saying, ‘it’s 150 years in the future, there’s not going to be any homophobia’. And my reaction was that I think there will be. I think we’ve had homophobia for thousands of years in various shapes and formats, and misogyny and misandry where we treat men’s bodies as cannon fodder, which is also a really terrible thing that we do with gender. I don’t think we’re going to get away from that anytime soon. I think it’s improving, there’s little pockets of resistance to it, but we’re still going to be struggling with the same gender issues that we’ve been struggling with in the West, which is the culture that I’m most familiar with. For thousands of years we’ve been having these debates over what it means to be female and male and what roles men and women play in society. As long as we are anthropomorphising our robots, we are going to project all that gender crap onto our robots. And so, you’re going to get someone like Paladin, the robot character in Autonomous, who is really gender confused because robots probably won’t have the same relationship to gender that we do. And they also won’t have thousands of years of history of gender the way we do. So if that ever happens, if that future comes to pass, it will be really exciting to take gender studies classes with robots and see what they have to say.

Sarah: You already touched on human rights for robots or artificial beings. In the book you mention that human rights for artificial beings with human-level or greater intelligence were developed in the 2050s. And I’m wondering, how would your story have been different if artificially intelligent beings didn’t have those rights?

Annalee: Part of my conceit in the book was that things would probably not be very different. This is partly backstory stuff that I did in my brain, but there’s definitely hints of it in the story, which is that there’s a Robot Rights movement that has happened sometime in the past, and robots gained certain rights. But at the same time, they are still subject to a lot of the same forms of abuse that they have always been. They can still be owned and in fact, the default understanding is that when a robot is built, it will be owned by whoever builds it for ten years. And as the robots say to each other, a lot of robots don’t survive ten years. So effectively they are still enslaved and they are still property. Unless they get very lucky, or unless they are built by some nice liberal academics who are building free robots. The main difference would be, there’s a nominal idea in the law, that robots should have this right to become free and autonomous after ten years of ownership. And that does make a difference. It makes a difference in that it can motivate the robot to perform well at their job. And it means that you do have free robots running around who can come up with their own ideas about how the laws should work and how robot society should be run. There is a robot neighbourhood in Vancouver in the book and presumably there are other robot neighbourhoods around the world.

I guess now I’ve talked myself into saying there is a difference, because if they didn’t have those nominal laws, which are just a kind of minimum protection, there wouldn’t be any free robots. Or if there were, they would be illegal, or they would be extremely marginalized. So at least there’s this possibility that robots can be free, a little bit. But there is still a lot of prejudice against them. We see all these microaggressions against the robots who have jobs alongside humans. Humans will say things like, ‘a robot can’t come up with an original idea, it has to be programmed’. They have to overcome a lot of hatred in order to exist in the world as free creatures. It makes me think of civil rights in the United States. We nominally have all these laws that protect Black people in the US and other people of color. But there’s still so much prejudice and there’s still so much systemic racism that people of color are really struggling. So there’s the laws on the books, but then there’s the reality of how the social world functions. Robots are trapped in a place where they kind of have these rights, but they also don’t have a lot of rights. And even when they do have rights, people act as if they don’t.

"In this universe, the policymakers who are framing these laws don't present those laws as ‘humans are now slaves’. They say ‘this is the right to be indentured’. These are the human rights laws that allow you to have the right to sell yourself to a corporation or an individual for however long you agree. It’s portrayed as a new kind of freedom"

Annalee Newitz

Sarah: Another component of the book that intrigued me was that there were different classes of robots: some robots were adopted by humans and then raised autonomously with the ability to go to university and get a job like a human would, while others were developed as indentured labour and used for specific purposes. But even on top of that, in your book, humans could be indentured just like robots, meaning that there were robots that received better treatment and more freedoms than some humans did. I’m wondering why you chose to incorporate that into your book.

Annalee: There’s a couple reasons. One is that in science fiction there is an age old trope that there will be a robot uprising, and we will all be enslaved. And I asked ‘what would be a realistic way that we would all become enslaved as a result of robots achieving a human level of intelligence?’ And so in my book, humans do become enslaved because of robots, but it’s this really sneaky legal and policy-related set of rules that caused it to happen. Basically, once robots are granted human status, corporations that make robots push for this idea that they should still be able to pay off the cost of making the robots by owning them for ten years, by having them be indentured for ten years. And so once you have that established, that a human-equivalent being can be property, then it’s easy enough for corporate lawyers or other kinds of lawyers to argue, well, humans can be indentured to them because what could be more human-equivalent than a human? So the moment when robots gain a little bit more rights, that’s the moment when humans lose them. In this universe, the policymakers who are framing these laws don’t present those laws as ‘humans are now slaves’. They say ‘this is the right to be indentured’. These are the human rights laws that allow you to have the right to sell yourself to a corporation or an individual for however long you agree. It’s portrayed as a new kind of freedom because, for a lot of people, maybe it is better than living on the street or maybe it’s better than never getting tenure or something like that. Maybe it’s better to be indentured. That’s what’s happened in this version of future history. That was my sneaky way of trying to talk about the fact that, first of all, I feel like we never really got rid of indentured servitude. I think we live in a world now where we have a lot of things that are basically indentured servitude, but we don’t call them that. 
And also a way of talking about how slavery has always been a cornerstone of global capitalism, and I don’t think that’s ever really gone away. We’ve papered it over and we’ve come up with regulations that have gotten rid of some of the worst and most overt examples of slavery, but we still treat labour essentially like slave labour. I don’t mean to exaggerate it because, of course, there’s types of slavery we’ve had in history that we just don’t have anymore. But we could have it again and I don’t think it’s unrealistic to think that it could come back in a big way if we lived in a world where people were suffering from much more poverty than they are now. Or if there were vulnerable populations that were basically climate refugees, people who’ve been ripped out of their homes and don’t really have a place where they feel that they belong and don’t have a support network. You could see how people in that situation might actually go along with the idea of indenture as a viable opportunity. I think that it’s really important for us to realize that slavery and indentured servitude, just because you make laws against them doesn’t mean that they go away and it doesn’t mean that they can’t come back. Especially because when you look at the history of human civilization, slavery is always there, either at the center of the civilization or certainly lurking at the edges of the civilization. Humans just haven’t gotten over it quite yet. And I think we still have a long way to go before we can say that we’re past that form of labour organization.

"For whatever reason, and it's probably quite complicated, facial recognition is a place where people are drawing the line. Not just lawmakers, but the general public. It feels somehow more invasive to people than their data stream. And maybe that's just because it’s super concrete: this is my face, attached to my head."

Annalee Newitz

Sarah: Switching gears a little bit. Surveillance plays an important role in your book and maintaining the positions of powerful actors. The protagonist, Jack, takes a lot of precautions related to data that she leaves behind in order to evade law enforcement, like encrypting her data. And some of this is similar to the precautions that current activists are having to take to safeguard themselves and their work. In Hong Kong, for example, the government is using facial recognition to try and identify protesters, and protesters are responding by using masks and hairstyles and makeup to avoid being recognized. So along these lines, what role do you think policy should play in safeguarding people and their data and their identities, if any?

Annalee: It’s a huge question and it’s something that in the United States, we’re obviously just starting to think about right now. Here, where I live in San Francisco, we’ve outlawed the use of facial recognition for law enforcement, which I think is a good start. Obviously, there are sneaky ways that law enforcement can get around that kind of regulation. As we just saw, there’s a new private, Peter Thiel-funded company offering services to police departments that want to do facial recognition using data scraped from social media networks. So you can do all kinds of nasty things to get around those kinds of policies, but certainly it makes it a lot harder. For whatever reason, and it’s probably quite complicated, facial recognition is a place where people are drawing the line. Not just lawmakers, but the general public. It feels somehow more invasive to people than their data stream. And maybe that’s just because it’s super concrete: this is my face, attached to my head. It concretizes something that feels really abstract when people try to say, ‘but look, your data trail generates something that is just as identifiable as a face’. It’s still hard to grasp how that works. So I do think that we definitely need policymakers stepping in. We need policy to handle things like facial recognition, and its use by law enforcement and government. I definitely think we need policy dealing with how our online data is sold. And again, California (my state) is kind of leading the way on legislation around companies that sell personal data. Now it’s a lot harder for those companies to do that. Larger companies have to be transparent about what data they’re selling and they have to give customers the opportunity to say no to having their data sold. That’s the California Consumer Privacy Act, which is now the law. It’s super badass and it’s actually forming the basis for other states’ laws in the U.S. around this. Unfortunately, it has a very narrow effect right now.
It affects large companies where more than 50% of their revenue comes from selling data. So essentially any company that is largely in the business of selling people’s private information. And now we’re entering a phase in California where Facebook and Google are saying ‘well, that’s not us, we don’t sell data’. And perhaps in a technical sense, that’s true, especially for Google. I’m not sure about Facebook. So, there’s going to be a lot of debates over who is captured by this law. But the point is that we do have a law on the books that protects consumers’ private data. We need that for sure. And I think that, in 20 years, we’re going to look back and be like, ‘well duh, that was so obvious’. As soon as the Gen Z folks are coming of age and making laws, they’re not going to buy into any of this bullshit where it’s like, ‘oh, no, it’s all fine, we’re going to sanitize this data’. They’re going to say, ‘nope, we know, we grew up with this, this is obviously wrong’. I look forward to that.

Sarah: I found it interesting that, in the book, universities and independent research labs were the primary, if not only, space for innovation and experimentation and protest. In what ways do you think that reflects current tensions between researchers and larger corporations, not just in the realm of pharmaceuticals, which was the main focal point in Autonomous, but in relation to larger companies today?

Annalee: That’s a really good question. I think that’s a result of my focus in the novel, because I worked in academia for a really long time. So naturally all of my books and stories have academics, because I used to be an academic. I love universities. I think of them as a site of protest and a place where people can do research that isn’t just for profit. We do get glimpses in Autonomous of these groups of pirates and activists who are only very loosely affiliated with universities. And because they’re working with technology and science there is always going to be some overlap, because these are people who have probably been educated at least partly in universities, or they may be working a little bit with universities to get equipment. It’s true that we don’t see any groovy nonprofits or super nice corporations in Autonomous, but I have to believe that, somewhere out there, there is one. If I were to write a billion more books set in the Autonomous world, we would find out that there is pro-social innovation happening in the for-profit world, but just not very much of it. It’s one of many things about Autonomous that I think is maybe too utopian: to imagine that somehow universities survive intact, that they would have funding, and that they would be somehow lightly protected from corporations ripping them apart or destroying the careers of the academics in the novel who challenged the corporation. Maybe that’s unrealistic; maybe in 150 years we won’t even have universities that are independent from corporations anymore. Maybe Google will have its own Google engineers.

"...there's a clear connection between working alongside robots and kind of expecting people to function physically, like a robot arm. That's another place where labour policy and labour law could step in....there has to be some way that we can frame what kinds of labour are healthy and safe for humans and not confuse that with what's healthy and safe for robots."

Annalee Newitz

Sarah: Something that stuck out to me while reading the book was whether there is a link between having intelligent robots and the desire to make human employees more optimized through designer drugs, mainly for the purpose of workplace productivity. I was thinking this over and wondered, does the development of robots force humans to manufacture drugs that would increase their performance in order for them to be competitive?

Annalee: I’ve heard that scenario many times, it’s a common idea among futurists, and I don’t think it’s wrong. But I don’t think having it be causal like that, as in ‘we developed really great automation and so people try to be more like automation’, is the right order. I think that both things are developing in parallel: people wanting to be more efficient at work using drugs or using meditation or using some other thing, and the idea of developing automation that could do human jobs more efficiently. It’s all about this fetish for hyper efficiency. I think it makes sense that a culture like ours in the United States, that is obsessed with that kind of efficiency, both mental efficiency and physical efficiency, would produce humans that are very medicated and robots that are very capable of engaging in human practices. Both of those things are expressions of the same kind of cultural fetish for being a creature that’s entirely focused on productivity.

Sarah: And what are your thoughts on workplace automation and the role that policymakers play in relation to these developments today?

Annalee: That’s a great question. I think that workplace automation is happening. And so now is really the time for policymakers to be coming in and thinking about what we’re going to do to regulate it. One of the things that we’re seeing happen right now in the United States is that a lot of our labour laws that have been in place for generations are being relatively abruptly weakened, if not outright taken away. Human workers are losing a lot of their protections; unions are being incredibly hobbled. One of the things we have to think about is just worker rights. What are the rights that workers have? Do those rights include something related to automation? Do we want to also think about workplace safety? In some cases, obviously, automation can create a dangerous situation. Do we want to regulate things like how efficient we expect people to be? Reports have been published on injuries at Amazon warehouses, and the general public was really shocked that there were so many injuries. It was never clear whether it was because working with robots made people try to be so much more efficient that they were injuring themselves, or if it’s always been the case that working in warehouses caused injuries that nobody bothered to count. Maybe Amazon is just in keeping with the industry standard of breaking the people that work there. But I think there’s a clear connection between working alongside robots and expecting people to function physically, like a robot arm. That’s another place where labour policy and labour law could step in and say, ‘look, a human cannot lift more than 20 things per X amount of time’. It sounds weird to get into details like that, because maybe that’s too specific. But I do think there has to be some way that we can frame what kinds of labour are healthy and safe for humans and not confuse that with what’s healthy and safe for robots.
We may wind up with some kind of math where robots get outlawed in certain industries, based on labour law, and that might cost us a little bit in efficiency, but it might actually benefit us in the end because we have better working environments. It’s hard to say exactly what form it’s going to take but I do think that this is ripe for labour lawyers and workplace safety regulation.

Sarah: So I realized we have 10 minutes left. So I just have one last question for you to round this up. What do you think the role of science fiction is or should be in policymaking or public discourse on social and policy issues?

Annalee: I think it has a big role to play. I often tell people that I think that policy is actually just hard science fiction, because when we make policies about anything from urban planning to regulations around the environment, it’s always based on an understanding of how things will be in the future, and how we think human culture will function. A lot of the time, the worst policies are the ones that don’t take the future into account, that say, ‘all right, everything is always going to be the same, so we’re always going to design cities in x way, or we’re always going to have labour law look like this’. That’s when laws get wrecked, because people say, ‘well, that’s no longer relevant, I guess we don’t need labour law anymore’. Instead of saying ‘labour law is an evolving thing and now we need new labour laws’. I think science fiction can really help policymakers and lawmakers think about possible future scenarios that they might want to legislate around. Maybe they want to make legislation to prevent a particular future or to make one come to pass or take into account how the world is changing. I love the idea of policymakers reading science fiction and working with science fiction writers. Even if sometimes you think about a future only to say ‘that’s totally unlikely, give me a break’, at least you thought about it and eliminated that possibility in your own mind. Sometimes shit comes to pass that is really unexpected and so it helps to have read extremely strange science fiction to understand what’s going on. Certainly that’s the case in the United States right now and in England too. We’re having these wacky surreal futures and we wouldn’t have been as prepared for them if we had not been reading really strange science fiction. 

This interview has been edited and shortened for publication.

Through the Policymaker’s Guide to the Galaxy series, Brookfield Institute’s team interviews leading science fiction authors, both Canadian, and international. Join us as we examine the future of work and the economy, on Earth and in space!

For media enquiries, please contact Lianne George, Director of Strategic Communications at the Brookfield Institute for Innovation + Entrepreneurship.