Is the fear of Artificial Intelligence justified?
- Scott Murphy
- May 20, 2016
- 4 min read
Updated: Nov 23, 2019

The ability to create a remarkable machine which learns and interacts with the world as a human brain does comes across as a madman’s dream brought forth by sci-fi novels and movies. The very thought of computers developing themselves into more intelligent systems raises arguments questioning the dangerous concept, while invoking fear of the power such machines would wield. The concept is known as Artificial Intelligence (AI), and the idea is nothing new. In fact, the engineering of AI systems is responsible for much of the massive technological advancement we have experienced over the last 25 years.
Although humanity has always played with the idea of AI and used it to make our lives easier, we are now reaching a point where AI is going to be used to do things we otherwise would not be able to. A prime example of this lies in Facebook’s plan for the future, where they are creating AI which builds AI. We are beginning not only to use this madman’s dream to run our lives, but we are now entrusting it to advance our technology beyond what we can do alone. If Hollywood’s prediction proves true, this doesn’t end well for humanity, and the reality of Terminator’s Skynet could be just around the corner.
Earlier this year, Microsoft launched an AI bot into the world via Twitter, crashing and burning their creation in the process. It could tweet and react without guidance or pre-written sentences from her developers. Designed to mimic the language of a 19-year-old American girl, this AI, named Tay, learned what to say from other Twitter users instead. Her first release into the Tweetosphere lasted around 16 hours before she was shut down as a result of tweeting racist, homicidal and incredibly inappropriate viewpoints and remarks.
Unsurprisingly, Tay’s learning curve was abused by the online community, and although the result is an amusing story, her time alive in the world forms nothing more than the setting of a tragedy. Microsoft issued an apology shortly after, saying they did not anticipate the amount of abuse their system would receive. There is a glaring question resulting from this experiment, though: how do we teach a computer what is right and what is wrong, and furthermore, who decides this?
Despite the failed launch of Tay, Facebook announced they would be incorporating AI bots to respond to questions posted to Facebook Pages. AI already runs much of Facebook’s backend; in fact, whenever you upload a photo to its servers, an AI system that has learned what your friends’ faces look like even tags them for you.
On the 4th of January this year, Facebook founder and CEO, Mark Zuckerberg, said on his personal page, “My personal challenge for 2016 is to build a simple AI to run my home and help me with my work.” After their first quarterly conference in April, though, the term ‘simple’ seems to have dropped from the tech innovator’s plan, replaced with the objective of crafting ‘True AI’ systems. What the social media giant announced during the conference is that they have created an AI that can order products and services for their users through natural language, just as a customer would from a human being in a store. On launching this service, Mr. Zuckerberg said, “So the biggest thing that we’re focused on with artificial intelligence is building computer services that have better perception than people.” Also revealed was Facebook’s plan to create AI which could create AI. This massive step in technological advancement, progressing AI beyond human input, is matched by competing tech giant, Google.
In early 2014, Google acquired an AI development company, DeepMind Technologies, for approximately $500 million USD, instantly renaming it Google DeepMind. A condition of sale agreed upon by Google was the establishment of an in-house ‘AI ethics board’ which would oversee the technology and ensure it was applied safely. The members of this board remain a mystery even today, with neither DeepMind nor Google willing to comment publicly on what the board does.
In March this year, the company’s AI made headlines after it beat professional ‘Go’ player Lee Sedol. The game of ‘Go’ follows a simple concept, but due to the vast number of potential moves available, it is tough for computer systems to outsmart professional players. This defeat marks a real-world example of how the technology is already outsmarting even the most dedicated human. While this particular display revolved around a board game, the application of AI extends into healthcare: DeepMind has acquired records from the National Health Service in the UK, bringing with it concerns regarding what a free-thinking computer system can do with such information.
As Google DeepMind experiments with the health records of the UK, “[supporting] clinicians by providing technical expertise,” Facebook and Microsoft are developing AI to replace salespeople and online personalities. Medical advice could soon be administered by a computer data bank, while sales, marketing, and entertainment are integrated into our lives solely through how people interact with an artificially learning program.
The future is looking more and more like Skynet every day, but a homicidal, free-thinking robot army is still a long way off. For now, the employment of ethics within this technology is a must and of the utmost public interest. While it is reasonable to assume the best interests of humanity are directing AI development, the questions into the sustainability and practicality of this technology should be public rather than corporate. If AI is already outsmarting us in a board game, when will it begin to outsmart its very creators?