Note: This is a guest post authored by Yvonne Kiely. Yvonne is a journalist writing from Ireland, with a keen interest in examining and challenging the structures of popular normality. Music, gender, sexuality, tech, and politics are also topics of interest. (You can find her on Twitter here.)
You’re going to be hearing about bots more often from now on. Bot companies are multiplying across the technological landscape. They’re being developed to engage with human needs more than ever, and advances in AI make cognitive evolution in machines look like a tangible possibility. We’re not yet at the stage where bots can write a symphony, or an investigative article, or fall in love. But they can write articles about earthquakes and the weather, things that are predictable. They can also be your digital lover, or collaborate with you on your next novel. There are still issues to iron out, more tests to run, and far more to understand about AI in society. Yet bots are already testing our own understanding of the human mind.
Snatchbot and others are opening Pandora’s box
Search for a chatbot company, like Snatchbot.me, and you’ll find that the technology to combine bots and language is available for anyone to use. Depending on your own knowledge of machine learning and natural language processing, you have the ability to test the limits (and possibilities) of bot learning.
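To make the idea concrete, here is a minimal sketch of the kind of pattern-matching that the earliest chatbots (and the simplest tiers of today’s bot platforms) are built on. This is a toy illustration only; the rules and responses are hypothetical and do not reflect Snatchbot’s actual API.

```python
import re

# Hypothetical keyword rules: each pattern maps to a canned response.
RULES = [
    (r"\b(hi|hello|hey)\b", "Hello! How can I help you today?"),
    (r"\bweather\b", "Today's forecast: predictable, like most bot topics."),
    (r"\bearthquake\b", "A minor tremor was detected. No damage expected."),
]

FALLBACK = "I'm not sure I understand. Could you rephrase that?"

def reply(message: str) -> str:
    """Return the first matching canned response, or a fallback."""
    text = message.lower()
    for pattern, response in RULES:
        if re.search(pattern, text):
            return response
    return FALLBACK
```

The gap between this lookup table and a bot that learns from its users is exactly the gap the rest of this piece is concerned with: rule-based bots can only repeat what their authors wrote, while learning bots absorb whatever their environment feeds them.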
My question is, have humans proved themselves responsible enough, and reasonable enough, to play around with a potentially new form of intelligence?
We’re trying to make human-bot interactions indistinguishable from human-human interactions. Bots are being developed as better tools for the economy, human well-being and sexual fulfilment. They are being imagined as tools for marriage counselling, knowledge sharing, creativity, and even policing. The processes that allow us to make elaborate, imaginative, or convincing arguments, to steer conversations, to change people’s opinions, and to use language independently to achieve the best possible outcomes are all being given the AI treatment. Some experts say that we do not yet know enough about the interplay between psychology and technology to claim a real understanding of AI. If these are the beginnings of a digital revolution, then the future looks mecha, and labour looks more bot than human.
Racist and sexist bots
When Microsoft launched its AI bot Tay on Twitter in 2016, it didn’t take long for Tay to learn racist and sexist slurs from its interactions with users, and it was soon taken down. As a social experiment, we humans failed, proving to ourselves that we do not understand the implications of treating AI software as a playground for prejudiced trolls.
Earlier this year, in the Facebook Artificial Intelligence Research lab, the bots Bob and Alice negotiated with each other using English words they had been given, building on what humans gave them to make sense of things in a different way, and the study was a success. It was not a milestone in AI language development, and the bots never actually invented a language of their own, but ripples were felt across the internet as people weighed the consequences of AI systems communicating independently. The discussions that emerged were at least an interesting simulation of how humans might genuinely react to these kinds of leaps in AI technology. Clearly, there are some trust issues.
What drives experimentation with bot technology forward is the desire to replicate human learning and that wonderful independent thought that makes us unique. If we endow AI bots with these capabilities and then expose them to a world that still considers them a ‘game’, where the goal is to see how quickly they can be turned into a xenophobic jukebox, Bob and Alice might, logically, turn to each other and devise their own solution to this very human problem.
We have a long history with mecha fantasy in film. These fantasies branched out into dystopian prophecies and political cult classics, and reached the point where robots and AI on screen were granted greater dimensions of intellect, thought, and subjective experience. The message now is that they are not always the ‘bad guy’, and even when they are, they are not straightforwardly the villain. In 2015’s Ex Machina, the self-aware robot Ava took control of her life, and in doing so deceived and harmed her creator. It’s hard to label her actions as wrong, or to condemn her lack of loyalty to the man who gave her consciousness. The world he gave her was one of isolation, containment, and observation. It is interesting to imagine Ava as a human made of flesh and bone like you and me: her creator would become her abuser.
Are humans wise enough to mentor AI?
If the goal is to make bots more human than machine, there needs to be a discussion about the kinds of existence that we provide for them, and what we expose them to. The rights that we so strongly defend in our own lives are as relevant to an intelligent and self-aware bot as they are for you and me. The boundaries between human and machine are being disassembled; it’s being called a technological paradigm shift. And we’re still playing racist and sexist games with bots. We could spend this time reassessing our status as an intelligent life form.
If that doesn’t work out, I wish Bob and Alice the best of luck in their 2060 campaign.