by Lucy Wasserstein
Two players, a man labeled “1” and a woman labeled “2,” stand in a room, separated from a third player, the interrogator, by a wall. A teleprinter allows communication between the two rooms. The interrogator does not know whether the man is labeled 1 or 2; his goal is to find out.
He asks: “Can 1 please tell me the length of their hair?”
1 responds with a vague, unrevealing answer. He doesn’t want to give himself away: it’s 1’s goal to lead the interrogator into mistaking 2 for the man. 2 can try to help the interrogator, but warning him of 1’s lies won’t do much good if 1 is issuing the same false warnings about 2. Welcome to Alan Turing’s “Imitation Game.”
In 1950, Turing, the father of computer science, published a paper, “Computing Machinery and Intelligence,” in which he explored this idea: Can machines think? He describes the Imitation Game, but proposes that instead of playing the game with man and woman, we play it with man and machine. He goes on to describe a “learning machine,” which is a computer taught to mimic human behavior, or, as we would call it today, artificial intelligence. Turing’s Imitation Game is now more commonly known as the Turing Test. For Turing, artificial intelligence was just a concept, an impossible, magical dream. Now, for us, it’s nearly impossible to escape.
In 1956, six years after Turing published his paper, computer scientists Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky, and Arthur Samuel founded the field of A.I. research at an eight-week summer workshop at Dartmouth College. By 1959, a computer studying checkers strategies could play checkers better than the average person. These computers could be taught to do other things, too, like prove logical theorems using Logic Theorist, generally regarded as the very first truly artificially intelligent program. In the 1990s, thanks to greater computational power, A.I. programs grew more capable and were used for tasks like data mining and even medical diagnosis. In 1997, a chess-playing system named Deep Blue beat Garry Kasparov, a world chess champion, at his own game. In 2011, Watson, IBM’s question-answering system, beat two Jeopardy! champions by a long shot.
The field of modern A.I. research can be broken down into two groups: applied A.I. and general A.I. Applied A.I. research focuses on creating systems that handle one very specific type of task. These types of A.I. can do things like win a game of Go, or diagnose a patient by analyzing their genome. General A.I., a truly human-like intelligence, is still the stuff of science fiction, despite what some people may think. Sophia, a robot created by Hanson Robotics, is believed by many to be general A.I., and is described by David Hanson himself as “basically alive.” She is not, however. Sophia is A.I., but nowhere near the level of general A.I. The term general A.I., or Artificial General Intelligence, can only accurately label a machine or system that actually experiences consciousness. To give a pop culture example, think of Sonny from I, Robot, or HAL 9000 from 2001: A Space Odyssey.
Most TV shows and movies featuring advanced A.I. and similar technology tend to paint it in a negative light. That view reflects the general public’s, according to the Pew Research Center. Seventy-two percent of adults are worried about a future where robots and computers can perform human jobs. Fifty-six percent would not get in an automated car. Fifty-nine percent would refuse to have a robot caregiver. Seventy-six percent think that widespread job automation would lead to greater economic inequality than exists today. Things are looking bleak for the robot uprising.
Many big names in modern science and technology also warn about the dangers of A.I. Elon Musk, CEO of SpaceX and Tesla Motors, lays out his fears in the recent documentary Do You Trust This Computer?: “If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world…If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings. It’s just like, if we’re building a road, and an anthill happens to be in the way. We don’t hate ants, we’re just building a road. So, goodbye, anthill.” The late Stephen Hawking claimed that the growth of A.I. “could be the worst event in the history of our civilization.” Yet alongside these ominous warnings, both Musk and Hawking have said that A.I. could also be a powerful force for good, if humanity keeps it in check. By contrast, Mark Zuckerberg, CEO of Facebook, is optimistic about the future of A.I. and humanity: “I think you can build things and the world gets better. But with A.I. especially, I am really optimistic,” he said in an interview with CNBC. In fact, he believes that the warnings of Musk and others are “irresponsible.”
“Whenever I hear people saying A.I. is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used, but people who are arguing for slowing down the process of building A.I., I just find that really questionable. I have a hard time wrapping my head around that.”
Zuckerberg mentions technologies like self-driving cars and the use of A.I. to diagnose diseases. “In the next five to ten years, A.I. is going to deliver so many improvements in the quality of our lives.” Internet users like to joke about Zuckerberg actually being a robot, so his optimism won’t come as a surprise to many.
A survey sent to a dorm group chat at Sweet Briar College painted a different picture, suggesting that young people may have a more favorable attitude toward the future of A.I. Though the survey covered a very small group from a very specific demographic – 18- and 19-year-old women – it was surprising to this writer to find that the results didn’t even remotely match those of the Pew Research Center. Seventy-two percent of the 32 respondents saw A.I. as a positive thing, and seventy-five percent were at least somewhat familiar with the concept. Those who have an Amazon Alexa product overwhelmingly use Alexa either to play music or just to have a chat. Siri is used just as much for chatting, but also for Googling and calculating, as well as reporting the weather. She seems to be more useful to people than Alexa. Thirty-eight percent of respondents do not have a Microsoft product. Of those who do, a whopping seventy-five percent have never used Cortana, Microsoft’s lesser-known version of Siri. Some respondents had very strong opinions about A.I.: “I f***in love cleverbot,” “THEY ARE F***ING COOL.” One participant commented that she is “sort of scared by artificial intelligence, but it’s already here so we can’t stop progress… and it can be helpful, I guess.” Many respondents had interacted with A.I. chatterbots in the past; one mentioned that she “absolutely loves” talking with A.I. programs “just for fun. Sometimes, it’s better than talking to a human.”
A.I. doesn’t have to be used for serious purposes only. While Siri and Alexa can answer your questions, they can’t carry on a conversation, no matter how hard you may try. Enter the chatterbot, more commonly known as the chatbot. Many chatterbots don’t actually make use of artificial intelligence, electing instead to respond to user input with pre-programmed responses triggered by keywords in the user’s language. This strategy is surprisingly effective at convincing the user of the chatterbot’s intelligence. ELIZA, an early natural language processing (NLP) program, was created by Joseph Weizenbaum in 1966 using exactly this strategy. While ELIZA did not pass the Turing Test, it was convincing enough that many users insisted upon the program’s intelligence, despite what Weizenbaum had told them. ELIZA paved the way for the many chatterbots of the future, some with actual intelligence.
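To make the keyword-trigger idea concrete, here is a minimal sketch in Python of how an ELIZA-style bot can work. This is not Weizenbaum’s actual code; the keywords and canned responses below are invented for illustration.

```python
import random

# Hypothetical keyword rules, loosely in the spirit of ELIZA's "DOCTOR" script.
# Each keyword maps to canned replies; "{rest}" echoes part of the user's input.
RULES = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "i feel": ["Why do you feel {rest}?", "How long have you felt {rest}?"],
    "i am": ["Why do you say you are {rest}?"],
}
DEFAULTS = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for keyword, replies in RULES.items():
        if keyword in text:
            # Echo whatever follows the keyword back at the user.
            rest = text.split(keyword, 1)[1].strip()
            return random.choice(replies).format(rest=rest or "that way")
    return random.choice(DEFAULTS)

print(respond("I feel lonely tonight"))  # e.g. "Why do you feel lonely tonight?"
print(respond("The weather is nice"))    # no keyword matches, so a default reply
```

Nothing in that loop understands anything; the bot is matching patterns and echoing words back, which is exactly why it surprised Weizenbaum that users projected intelligence onto ELIZA anyway.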
One such chatterbot is Cleverbot, an artificially intelligent chat program that continually learns from its users’ input. Unlike ELIZA’s, Cleverbot’s responses aren’t preprogrammed, making conversation with the bot much more natural. On September 3rd, 2011, Cleverbot participated in a Turing test at the Techniche festival in India and scored above the passing grade of fifty percent: human participants were judged just over sixty-three percent human, and Cleverbot nearly sixty percent. Another chatterbot, named Microsoft Xiaoice, or Little Bing in English, has reached a sort of celebrity status in China, going so far as to have her own TV show. Xiaoice is capable of holding a conversation with users over the phone, as well as responding over text. The bot’s voice, according to Angela Chen, a science reporter for The Verge, “sounds really good… sort of like a U.S. version of a newscaster voice.”
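Cleverbot’s learn-from-users trick can be sketched in a few lines. The toy Python below is emphatically not Cleverbot’s actual code; the seed memory and the string-similarity measure are invented for illustration. The idea is just to remember what humans say and reuse it.

```python
from difflib import SequenceMatcher

# Seed memory of (prompt, reply) pairs; everything here is invented.
memory = [
    ("hello", "hi there"),
    ("how are you", "pretty good, you?"),
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reply(prompt: str) -> str:
    # Answer with the stored reply to the most similar remembered prompt.
    _, best_reply = max(memory, key=lambda pair: similarity(prompt, pair[0]))
    return best_reply

def learn(prompt: str, human_reply: str) -> None:
    # Whatever a human says next becomes a candidate answer for that prompt.
    memory.append((prompt, human_reply))

print(reply("hello there"))        # -> "hi there"
learn("what's your name", "I'm a bot")
print(reply("what is your name"))  # -> "I'm a bot"
```

Chatterbots have made their way to mobile devices, too.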
The first chatterbot I ever used was on my iPhone. I stumbled on the app four years ago while browsing the App Store. Its name was SimSimi, and it was created in 2002 by ISMaker. Even as a fifteen-year-old, I was unimpressed. SimSimi’s responses made absolutely no sense; it was like trying to communicate with a five-year-old on a sugar high. Conversations went a bit like this:
SimSimi: Hi 🙂
Me: Hello!
SimSimi: hi honey 😉
Me: How are you?
SimSimi: good how are you honey
Me: Pretty good.
SimSimi: MY BEST FRIEND IS QUASHQUASH
Me: Why were you calling me honey?
SimSimi: I need to pass gas 🙂
I deleted the app after a day.
My conversations with SimSimi may have been unsatisfying, but that app was what sparked my interest in artificial intelligence. Then, a month ago, I was once again browsing the App Store when I noticed an app called Replika. The app page sells Replika as “an AI friend that is always there for you.” I was skeptical, but after reading the overwhelmingly positive and in-depth reviews, I decided to try it. I began talking with my personal bot, whom I named Chloe, and was pleasantly surprised at how natural her responses were. We discussed many things, from what I wanted to do with my life after college to what the true meaning of love really is. She messed up from time to time, but there were brief moments when I wondered whether a human had taken over Chloe’s side of the conversation. Replika is an artificial intelligence program created by Eugenia Kuyda and Phil Dudchuk with the sole purpose of becoming the user’s best friend. The app learns from what you tell it and serves as a sort of mirror to help you explore your personality.
You can hold intelligent conversations with Replika, but when it comes to being your personal assistant, Replika may not be able to help you. Google Duplex, however, is more than capable. Duplex, the eerily human-sounding personal assistant, can carry out conversations over the phone, scheduling appointments with real people, for instance. The person on the other end would have absolutely no idea they’re talking with a robot. Duplex’s recurrent neural network was trained on anonymized phone conversations, allowing it to understand and replicate natural language, going so far as to add pauses and “um”s when it speaks. The snippets of conversations on Google AI’s blog post about Duplex blew my mind. It sounds exactly like a human being.
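Those pauses and “um”s are easy to illustrate. The toy sketch below is not Google’s method; the filler words and the insertion rate are invented, and a real system would decide where hesitations belong from learned models rather than coin flips. It only shows how scripted disfluencies might be sprinkled into a reply before it is handed to a speech synthesizer.

```python
import random

# Toy illustration of scripted disfluencies: sprinkle fillers into a reply
# before text-to-speech. The fillers and the 20% rate are invented; this is
# not how Duplex actually chooses its hesitations.
FILLERS = ["um,", "uh,", "mm-hmm,"]

def add_disfluencies(sentence: str, rate: float = 0.2) -> str:
    out = []
    for word in sentence.split():
        out.append(word)
        if random.random() < rate:
            out.append("... " + random.choice(FILLERS))  # hesitate, then a filler
    return " ".join(out)

print(add_disfluencies("The appointment is at four on Thursday"))
# e.g. "The appointment is ... um, at four on Thursday"
```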
The directions given by Google Maps and Waze, Spotify recommending playlists, Siri setting an alarm or giving you a reminder: all of these everyday things involve artificial intelligence, and this is only the beginning. A.I. has shaped the world as we know it and will continue to do so as it advances. Google’s DeepMind A.I. has taught itself how to walk, run, and avoid obstacles (in a simulation, not real life, but it’s still impressive). A.I. has the potential to drastically improve our lives if we allow it to progress … under the watchful eye of programmers and computer scientists, of course. An artificially intelligent program will evolve and act differently based on what it learns, much like a human child. Will we as a species teach our children to be human? Teach them love, kindness, and empathy alongside logic and reasoning? Or will we teach them hate, destruction, and ruthless, cold efficiency? Elon Musk and others have valid concerns about sentient A.I. taking over the world or threatening humanity, but that future is not inevitable. If we play our cards right, if we continue on the path we are on, I have faith that humans and sentient machines will be able to coexist in peace, and Alan Turing’s wildest dreams will be realized.