How Artificial Intelligence in Films Differs From Reality

Nikolas Kairinos, CEO and founder of Fountech, takes a look at how Artificial Intelligence is presented in films, and compares that with the reality.

For many, the only reference point they have for artificial intelligence (AI) comes from Hollywood films. For the best part of a century, Sci-Fi films have been awash with dystopian visions of robots taking over the world. Besides being wildly inaccurate, these images have had a drastic impact on the way people understand artificial intelligence and its practical capabilities today.

According to recent research by Fountech, one in four UK adults think that AI could be responsible for the end of humankind. Granted, this view sits at the more extreme end of the scale; the majority of UK adults (62% of respondents to the survey) actually believe that AI will do more good than harm to the world.

Nonetheless, the first statistic highlights a general lack of societal awareness when it comes to AI. To dispel some of the common fears and misconceptions about AI – which no doubt stem largely from inaccuracies in pop culture – I’ve listed below some of the main points that Hollywood gets right and wrong about this technology.

Expectation vs reality  

From Metropolis in 1927 through to cult classics like Blade Runner and The Terminator in the 1980s, and more recent blockbusters like Minority Report and WALL-E, AI has been a popular theme for production companies for a long time now.

The trend is also picking up: between 2010 and 2018, there was a 144% increase in the number of AI-themed films released compared to the decade before. In one respect, this reflects growing interest in and awareness of the technology, which is undoubtedly a good thing. But with more movies portraying sensationalist interpretations of AI, it also means there is a greater risk of exacerbating misapprehensions about its day-to-day applications in the modern world.

Firstly, we must separate the two most common ways AI is presented in film – namely, as a cyborg with human-like or super-human abilities that can assist or harm mankind (think I, Robot), or as a more holistic operating system in the form of a wide network of technologies that are learning, communicating and acting (think The Matrix).

The former has little meaningful connection with the way we actually use AI today and lends itself better to action films in the Sci-Fi genre; it is not the direction development has taken, or indeed will take in the years to come.

However, the second interpretation is closer to the truth: AI typically manifests in non-physical computer programmes that apply human-like intelligence and decision-making to complicated, laborious and data-intensive processes. Think of Amazon recommending a product based on your search history, or a driverless car knowing when to apply the brakes to prevent a crash.
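To make that concrete, here is a minimal, purely hypothetical sketch of the kind of logic behind a product recommendation – scoring catalogue items against a user’s recent searches. It is a toy illustration, not Amazon’s actual system, which draws on far richer signals such as purchase history and collaborative filtering.

```python
# Toy content-based recommender: scores products by word overlap with
# a user's recent searches. Purely illustrative of the general idea.

from collections import Counter

# Hypothetical catalogue of product names and descriptions.
PRODUCTS = {
    "trail running shoes": "lightweight trail running shoes with grip",
    "road bike": "carbon road bike for racing",
    "running socks": "breathable socks for running and training",
}

def recommend(search_history: list[str], top_n: int = 2) -> list[str]:
    # Count every word the user has searched for recently.
    search_words = Counter(word for query in search_history
                           for word in query.lower().split())
    # Score each product by how often its description words
    # appear in the user's search vocabulary.
    scores = {
        name: sum(search_words[word] for word in description.split())
        for name, description in PRODUCTS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["running shoes", "trail running"]))
# -> ['trail running shoes', 'running socks']
```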

There’s no doubt that artificial intelligence has come a long way since the term was first coined in 1956. However, its abilities are nowhere near what we might assume from watching these films; indeed, the technology is applied to far more mundane matters than controlling our lives or threatening the planet.

Debunking misconceptions created by Hollywood 

Let’s explore this within the context of some classic Hollywood films. Perhaps the greatest example is 2001: A Space Odyssey and HAL (the Heuristically programmed ALgorithmic computer). Despite being created to control the systems of the Discovery One spacecraft – the prime setting for much of the film – the machine quickly begins to “think” for itself and take its own course without the involvement of the human crew.

So, what’s wrong with this picture? In reality, we are a long way from AI being able to function without human input. Even one of the world’s most advanced AI systems – IBM’s Watson – must work alongside humans in order to function. Without the input of humans and developers, it could not perform the functions it does today, such as identifying potential symptoms and treatments for medical patients or helping finance companies manage risk.

The way AI performs these tasks comes down to much more comprehensible, human-engineered techniques. In the healthcare industry, for instance, physicians are using the natural-language processing (NLP) capabilities of AI to enhance their capacity to offer accurate and effective medical assistance. In simple terms, NLP is the sub-field of AI focused on enabling computers to understand and process human languages; it is through this process that AI can understand human input and extract data from it, before analysing that data to solve real-life problems. By feeding in patient data, therefore, the AI can use NLP to identify potential symptoms and treatments.
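As a rough illustration of that input-extract-analyse flow – and emphatically not Watson’s actual implementation – the sketch below pulls candidate symptoms out of free-text patient notes by matching against a small hand-built vocabulary. Real clinical NLP relies on trained language models, but the shape of the pipeline is the same.

```python
# Minimal sketch of the NLP flow described above: take free-text patient
# notes, extract recognised symptom terms, then map them to candidate
# conditions. A hand-built toy vocabulary stands in for a trained model.

SYMPTOM_TERMS = {"fever", "cough", "headache", "fatigue", "rash"}

# Hypothetical lookup table linking symptom combinations to conditions.
CONDITION_RULES = {
    frozenset({"fever", "cough"}): "possible respiratory infection",
    frozenset({"headache", "fatigue"}): "possible migraine",
}

def extract_symptoms(note: str) -> set[str]:
    # "Understand" the input: tokenise and keep known symptom terms.
    words = {word.strip(".,").lower() for word in note.split()}
    return words & SYMPTOM_TERMS

def suggest_conditions(symptoms: set[str]) -> list[str]:
    # Analyse the extracted data against the rule base.
    return [condition for pattern, condition in CONDITION_RULES.items()
            if pattern <= symptoms]

note = "Patient reports a persistent cough, mild fever and fatigue."
symptoms = extract_symptoms(note)
print(suggest_conditions(symptoms))  # ['possible respiratory infection']
```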

In other cases, Watson’s visual recognition is being used to help doctors read scans such as X-rays and MRIs to better narrow down a potential ailment. This entails the machine processing raw visual input by quickly and accurately recognising and categorising different objects, after which it is able to offer an indication of how best to proceed.
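To give a flavour of how that recognise-and-categorise step looks in code – using an off-the-shelf PyTorch model as a stand-in, not Watson’s proprietary pipeline – the sketch below classifies an image with a pre-trained network. A real radiology tool would use a model trained on medical scans rather than everyday photos.

```python
# Sketch of image classification with a pre-trained network (PyTorch /
# torchvision). A general-purpose classifier stands in for the kind of
# specialised model a medical imaging tool would use.

import torch
from PIL import Image
from torchvision import models

# Load a pre-trained ResNet and its matching preprocessing transforms.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def classify(path: str, top_k: int = 3) -> list[tuple[str, float]]:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():                 # inference only, no training
        probs = model(image).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = weights.meta["categories"]   # human-readable class names
    return [(labels[i], p.item()) for p, i in zip(top.values, top.indices)]

# "scan.png" is a hypothetical path; any RGB image works for the demo.
print(classify("scan.png"))
```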

Meanwhile, in the finance sector, Watson’s question-answering abilities are being harnessed to offer financial guidance and help companies manage financial risk. It does this by analysing questions and drawing on huge data stores to produce tailored answers. Artificial intelligence can sift through, process and analyse huge volumes of data at lightning speed, not only offering accurate responses to given questions, but doing so far more quickly than a human ever could.
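The retrieval step behind that kind of question answering can be sketched in a few lines – here with TF-IDF similarity from scikit-learn, a deliberately simple stand-in for the far more sophisticated techniques a system like Watson employs. The document snippets are invented for the example.

```python
# Minimal retrieval-style question answering: represent documents and a
# question as TF-IDF vectors, then return the most similar document.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCUMENTS = [  # hypothetical snippets from a financial knowledge base
    "Diversifying a portfolio across asset classes reduces overall risk.",
    "Credit risk is the chance that a borrower fails to repay a loan.",
    "Market risk arises from movements in prices and interest rates.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(DOCUMENTS)

def answer(question: str) -> str:
    # Vectorise the question, then pick the closest document by
    # cosine similarity.
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    return DOCUMENTS[scores.argmax()]

print(answer("What is credit risk?"))
# -> "Credit risk is the chance that a borrower fails to repay a loan."
```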

The broader point is that even this sophisticated AI cannot function without constant input and feedback from humans. For the time being at least, we can put to rest the idea that artificial intelligence will overtake the human race.

For decades, AI has been misrepresented in Sci-Fi movies, but we should not let this blinker our view of how this technology can enhance the world around us. In one respect, those working in the AI industry should be happy that AI is finally being presented to a mainstream audience and, at the very least, sparking interest and curiosity in this technology.

AI can solve problems and accomplish tasks that we previously considered impossible, and it will undoubtedly open doors to countless opportunities to make the world a better place. Those working within the AI industry have an important role to play in addressing common misconceptions about the technology and encouraging people to explore its immense potential.


Nikolas Kairinos is the chief executive officer and founder of Fountech.ai, a company specialising in the development and delivery of artificial intelligence solutions for businesses and organisations.