Diana: We were talking before about basic rights and humanity, and I wanted to explore those themes a little bit more. Particularly in the Corporation Rim, humans seem to have outsourced violence, security, justice, and safety, but they still need humans for certain jobs. One of my favorite quotes, and I'm paraphrasing, but the main character says "I like the humans in the (entertainment) feeds much better, but we can't have one without the other." What do you think about what they, in the Murderbot world, and we, in our world, value in terms of what humans can or should do?
Martha: A lot of the work they outsource to bots would be almost impossible for humans to do. The big cargo bots and the haulers move things a lot more efficiently than humans could, and they can also work outside the space station to move cargo from ship to ship. You can have a human operator inside, but it would be incredibly dangerous and not very productive. The things that they are not outsourcing (to bots) are scientific research and the development of their media: storytelling, acting, music, writing, all the artistic work involved in entertainment, anything involving creativity. Murderbot makes this point, which you mentioned, that it is humans who create the entertainment feeds, and humans who invented the cubicles that SecUnits use to repair themselves. The bots in the story are not at the level where they could duplicate that creativity, or the ability to take the information gathered by the bots during research and use it to inform theories about what is going on and what it means.
Diana: Related to that, I think science fiction is a really good tool, particularly when it's set in a world with space travel and planetary settlements, to heighten our awareness as readers of the human dependence, current and future, on technology, particularly when that technology is sentient. I was wondering: what do you think our biggest blind spots and opportunities are when it comes to technology as we are now? What do we get wrong about AI?
Martha: Currently, we're a world away from developing a sentient AI, if that's even possible. I wouldn't want to say it's not possible, because so many things we have now we wouldn't have thought possible. I think we are having trouble right now with how the technology is misused and how it can potentially be misused. I think [we are] very behind in legislation and in forming rules and laws about how it cannot be used, like taking in this information and basically tailoring it to influence people on a large scale. I'm not particularly an AI expert, so I'm looking at it as a layman, but that's my primary concern.
There is a show called Better Off Ted that came out several years ago, about a big evil corporation, and there's a bit where they have an elevator designed to operate without buttons. It recognizes people and takes you where you need to go. But it doesn't recognize Black people, the Black executives and scientists who work there, so they can't get anywhere in the elevator. It's a metaphor, but it's also a way of showing how AI right now is no better than the people who program it and the people who feed the information in.
Diana: A lot of Murderbot's transformation deals with discovering what guilt and responsibility are, so I was very curious about that kind of distinction, the responsibility of being human versus not. As a human you have certain responsibilities, you have certain accountabilities, and as a bot, or as a piece of equipment, you're not accountable; the company that owns you is. The line between the times when Murderbot was responsible for certain acts and the times when it wasn't is invisible to most of the world, much like the fact that it is or isn't a human. How do you envision that conflict of responsibility for the actions of a technology that makes decisions? In the case of our real world, they're not sentient, but I think it's an interesting parallel: when do you assign that responsibility?
Martha: If they're not sentient, like in our world, then it's the people who programmed it who have the responsibility. They should be checking to see what the program or AI was learning, like the case of the driverless car that hit someone because it didn't know that a bicycle wasn't something you could hit. That's a big simplification of what happened, but it was the responsibility of the programmers, who should have been looking at a range of things for it to react to and making sure it could be accurate; there should have been more testing to be sure that there were no gaps in those reactions. I don't understand why a driverless car wouldn't stop at any motion in front of it. When a human is driving, you're looking for movement. My foot is going to the brake before my brain even fully processes that. When it is not sentient, it is definitely the fault of the person who programmed it. And if it's a sentient being that has to be programmed with information, I'm still inclined to think it's the person who programmed it who is responsible, who told it it didn't have to stop for bicycles.
At some point, there was somebody who decided it was okay to hit bicycles, or decided that it was okay not to fully test. It always comes back to a person or a corporation. It's that old adage: garbage in, garbage out.