Artificial Intelligence: Do we fear that AI could be more intelligent than us?

Artificial intelligence has become a widely misused and often misunderstood concept. Some believe that artificial intelligence will apocalyptically destroy humanity, a notion for which Hollywood action and science-fiction films are probably most responsible. Others worry, for example, that they will lose their jobs to ‘smart machines’.

Artificial intelligence undoubtedly has enormous potential in various fields, but with that potential come certain risks, though not the apocalyptic ones mentioned in the previous paragraph. Most of us already take advantage of artificial intelligence daily, for example when we unlock a smartphone using facial recognition or rely on automatic word correction. Let’s take a look at what artificial intelligence is, how it works, and how we came up with this concept in the first place.

Turing: What we want is a machine that can learn from experience. Photo: vpnsrus.com

How did it all start?

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing, according to Encyclopaedia Britannica. In 1935, Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols.

The scanner’s actions are dictated by a program of instructions that are also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are, in essence, universal Turing machines.
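To make the idea a little more concrete, here is a minimal sketch of such a machine in Python (our own illustration, not Turing’s notation): a tape of symbols, a read/write head, and a table of instructions that is itself just stored data. The example ‘program’ merely flips every bit on the tape and then halts.

```python
# A minimal, illustrative Turing-style machine: a tape, a read/write head,
# and a table of instructions (the "stored program") held as plain symbols.
# The example program flips every bit on the tape and halts at the first blank.

def run(tape, program, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # unbounded tape as a sparse dict
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Program: in state "start", flip 0<->1 and move right; stop at a blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", flip_bits))            # prints "01001_"
```

Because the instruction table is stored on the same footing as the data, a machine of this kind can in principle read and rewrite its own program, which is exactly the possibility Turing pointed to.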

The earliest mention of computer intelligence was probably in an early public lecture that Turing gave in London in 1947, saying, “What we want is a machine that can learn from experience,” and that the “possibility of letting the machine alter its own instructions provides the mechanism for this.” Turing illustrated his ideas on machine intelligence by referencing chess, but only in theory, as the necessary technology had not yet been developed. The first real AI programs had to await the arrival of stored-program electronic digital computers.

Turing’s chess example became reality about 50 years later: in 1997, the Deep Blue chess computer built by the International Business Machines Corporation (IBM) beat the reigning world champion, Garry Kasparov, in a six-game match. However, the computer’s victory over man in the game of chess can be attributed to advances in computer engineering rather than advances in AI. Deep Blue was equipped with 256 processors, which enabled it to examine 200 million possible moves per second and look ahead as many as 14 turns of play. You can read more about the history and development of computers in this article.
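Deep Blue’s actual search combined purpose-built hardware with many refinements, such as pruning and a handcrafted evaluation of positions, so the Python sketch below is only a toy illustration of what ‘looking ahead’ several turns means: explore the tree of possible moves to a fixed depth and score the resulting positions. The hook functions moves, apply_move, and evaluate are placeholders that a concrete game would have to supply.

```python
# Toy illustration of "looking ahead" in a two-player game by brute force.
# This is not Deep Blue's algorithm, only the basic idea of exploring a game
# tree to a fixed depth and scoring leaf positions with an evaluation function.

def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score from `position`, searching `depth` plies ahead."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)          # leaf: judge the position heuristically
    scores = (
        minimax(apply_move(position, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in legal
    )
    return max(scores) if maximizing else min(scores)
```

Real chess engines combine this idea with pruning techniques so that far fewer positions have to be examined for the same search depth.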

By teaming up, the opportunities of AI for Europe can be fully ensured, while the challenges can be dealt with collectively. Photo: economictimes.

A landmark year for artificial intelligence

Exponential advances in artificial intelligence were later made possible by faster computers, algorithmic improvements, and access to large amounts of data. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence. The number of software projects that use AI within Google increased from “sporadic usage” in 2012 to more than 2,700 projects.

In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”, according to the online encyclopaedia Wikipedia. The amount of research into AI (measured by total publications) increased by 50 per cent between 2015 and 2019. Of course, the European Union has also recognised the potential of artificial intelligence.

On April 10th, 2018, 25 European countries signed a Declaration of cooperation on Artificial Intelligence (AI). Several Member States had already announced national initiatives on artificial intelligence, but by signing the declaration they expressed a strong will to join forces and pursue a common European approach. “By teaming up, the opportunities of AI for Europe can be fully ensured, while the challenges can be dealt with collectively,” the European Commission stated on its website, adding that any successful strategy for dealing with AI needs to be a cross-border one.

Under the declaration, the Member States agreed to work together on the opportunities and challenges brought about by AI. The cooperation will focus on reinforcing European AI research centres, creating synergies in research, development, and investment (R&D&I) funding schemes across Europe, and exchanging views on the impact of AI on society and the economy. Member States will engage in a continuous dialogue with the Commission, which will act as a facilitator.

How does artificial intelligence work?

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Most AI examples that you hear about today rely heavily on its core discipline, machine learning, which we have already written about. More broadly, artificial intelligence and machine learning intersect with a wide range of technologies, such as deep learning, the Internet of Things (IoT), and data science.

Artificial intelligence works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. Why do we even want machines to make decisions automatically?
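As a rough illustration of learning from patterns in data (our own toy example, not drawn from any particular AI system), the snippet below repeatedly adjusts two weights and a bias, perceptron-style, until a handful of made-up labelled points are classified correctly.

```python
# A tiny, illustrative example of learning a pattern from data: a
# perceptron-style rule nudges two weights and a bias until the model
# separates the two classes in the (made-up) training set.

# Each sample: (feature_1, feature_2, label), where the label is 0 or 1.
data = [(1.0, 1.0, 1), (2.0, 1.5, 1), (-1.0, -0.5, 0), (-2.0, -1.0, 0)]

w1, w2, b = 0.0, 0.0, 0.0                  # start knowing nothing
for _ in range(20):                        # iterate over the data repeatedly
    for x1, x2, label in data:
        prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        error = label - prediction         # -1, 0, or +1
        w1 += 0.1 * error * x1             # adjust weights toward the answer
        w2 += 0.1 * error * x2
        b  += 0.1 * error

print(1 if (w1 * 1.5 + w2 * 1.0 + b) > 0 else 0)   # classifies a new point as 1
```

The point is the loop: the program is never told the rule explicitly; it adjusts its parameters until the rule emerges from the examples, which is what “learning automatically from the data” means in practice.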

Machines that make decisions based on available facts have some distinct advantages over humans, according to the KDnuggets website. Unlike humans, machines are not swayed by emotional biases, which can get in the way of pure, logic-based decision making. A machine, for example, would not make a sub-optimal decision just because it was angry or in a bad mood. Productivity is also essential: machines do not need breaks or sleep, do not take vacations, and do not go on sick leave.

In addition, machines have much greater computing power; they can enumerate all the relevant permutations and combinations of factors and compute a resulting value for each in order to arrive at the optimal decision. Machines also don’t make unintended ‘human errors’; they don’t, for example, accidentally overlook a piece of data. Once programmed, a machine will do exactly as instructed until it breaks down.
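As a toy example of that exhaustive enumeration (the decision factors and the scoring function below are invented purely for illustration), a machine can simply try every combination of factors, compute a value for each, and keep the best one:

```python
# Illustrative brute force: enumerate every combination of a few decision
# factors, score each combination, and keep the best one. The factors and
# the scoring function are made up for the example.
from itertools import product

prices    = [9.99, 12.49, 14.99]
channels  = ["web", "retail", "partner"]
discounts = [0.0, 0.05, 0.10]

def expected_profit(price, channel, discount):
    # A stand-in scoring function; in practice this would come from a model.
    demand = {"web": 1000, "retail": 700, "partner": 1200}[channel]
    return demand * price * (1 - discount) * (0.9 if channel == "partner" else 1.0)

best = max(product(prices, channels, discounts), key=lambda c: expected_profit(*c))
print(best, expected_profit(*best))        # the highest-scoring combination
```

A person could evaluate a handful of such combinations by hand; a machine can evaluate millions of them without fatigue or oversight.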

In most cases, the fear results from fear of the unknown and the new. Photo: Getty images.

Should we be afraid of it?

As we mentioned at the beginning, the biggest fear surrounding artificial intelligence is that machines will become more intelligent than us and rule the world. This familiar Hollywood notion of evil artificial intelligence causes concern among the general public about the development of smart-system technologies. In most cases, the fear results from fear of the unknown and the new.

Another great fear of artificial intelligence stems from the idea of mass layoffs of human workers. Many fear that intelligent machines will ‘take’ their jobs, but those machines, and their parts, still need to be built, operated, and maintained. And while technology may replace humans in many industries, experts believe new industries and disciplines will emerge, resulting in more new jobs. Workers, however, may need to retrain and acquire new skills.

In future articles, we will discuss in more detail which industries artificial intelligence is already being actively used in, how it makes our lives easier, and in which areas we can still exploit its potential. In addition to practical examples of use, we will also look at which country is a leader in this field and the situation in Slovenia and the European Union.

Author: Rok Žontar

Keywords: AI, history, technology, European Union.

Disclaimer:

This article is part of a joint project of the Wilfried Martens Centre for European Studies and the Anton Korošec Institute (INAK), Following the path of digitalization in Slovenia and Europe. This project receives funding from the European Parliament.

The information and views set out in this article are those of the author and do not necessarily reflect the official opinion of the European Union institutions, the Wilfried Martens Centre for European Studies, or the Anton Korošec Institute. The organizations mentioned above assume no responsibility for the facts or opinions expressed in this article or for any subsequent use of the information contained therein.