
The risks of Artificial Intelligence

As you probably know, Artificial Intelligence (AI) is revolutionizing the world in many ways. It now delivers considerable gains in productivity and efficiency across a wide range of industries. Breakthrough innovations such as self-driving cars or remarkable software systems (e.g. AlphaGo1) would never have been possible without AI.

How to define Artificial Intelligence?

There are many ways to describe such an entity. However, most experts agree that AI can be defined as a program that perceives its environment and takes actions in order to maximize its chances of successfully reaching its goal. For example, AI is used in video games to generate dynamic, purposeful behaviour in non-player characters.

“A breakthrough in machine learning would be worth ten Microsofts.” Bill Gates
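The definition above can be made concrete with a toy sketch: a program that perceives its environment and picks whichever action brings it closer to its goal. The environment (a number line) and all names here are invented for illustration, not taken from any particular AI system.

```python
# A toy "agent" in the textbook sense: perceive the environment, then choose
# the action that maximizes the chance of reaching the goal.

def perceive(position, goal):
    """The agent's entire perception: how far it is from the goal."""
    return goal - position

def choose_action(observation):
    """Greedily pick the action expected to close the gap."""
    if observation > 0:
        return +1   # move right, towards the goal
    if observation < 0:
        return -1   # move left, towards the goal
    return 0        # already at the goal

position, goal = 0, 5
steps = 0
while perceive(position, goal) != 0:
    position += choose_action(perceive(position, goal))
    steps += 1

print(steps)  # the greedy agent reaches the goal in 5 steps
```

A non-player character in a video game works on the same loop, just with a richer environment and a richer set of actions.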

Are machines able to learn?

Yes, and surprisingly, machines can learn both from their own experience and from other machines’ experiences. By contrast, systematically learning from others’ experiences is not possible for humans. Experience enables machines to predict future outcomes using statistical techniques that exploit the predictive structure of collected data (i.e. Machine Learning). A good example is Netflix, which learns from your behaviour on the website and then recommends new content you are likely to enjoy.
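"Learning from experience" can be illustrated with the simplest statistical technique of all: fitting a straight line to past observations and using it to predict the next one. The data points below are made up for the example; a real recommender like Netflix's uses far richer models, but the principle is the same.

```python
# Ordinary least squares on a handful of past observations, then a prediction.

xs = [1, 2, 3, 4]          # e.g. weeks of viewing history
ys = [2.1, 3.9, 6.0, 8.1]  # e.g. hours watched each week

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept that minimize the squared prediction error.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Use the learned line to predict an unseen outcome."""
    return intercept + slope * x

print(predict(5))  # prediction for week 5, learned purely from the data
```

The program was never told the relationship between weeks and hours; it extracted it from the predictive structure of the data, which is exactly the point of Machine Learning.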

Can machines learn like us?

The icing on the cake is that more and more machines now process data in a way inspired by our basic understanding of the human brain, namely the interconnections between neurons (i.e. Deep Learning2). For instance, AlphaGo is the first program in history to defeat a Go3 world champion, Lee Sedol, winner of 18 world titles and considered a legendary player. The computer program beat the Korean champion 4-1 by using Artificial Neural Networks.

“I thought AlphaGo was based on probability calculation, and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful.” Lee Sedol, Go world champion
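The "interconnected neurons" idea can be sketched in a few lines: each artificial neuron computes a weighted sum of its inputs and fires if it clears a threshold, and layering neurons lets the network compute things a single neuron cannot. The weights below are hand-picked so the network computes XOR; a real deep network like AlphaGo's learns its weights from data and has millions of them.

```python
# A bare-bones two-layer neural network with step activations.

def neuron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_network(x1, x2):
    """Two hidden neurons feeding one output neuron compute XOR."""
    h1 = neuron([x1, x2], [1, 1], -0.5)    # hidden neuron: behaves like OR
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # hidden neuron: behaves like NAND
    return neuron([h1, h2], [1, 1], -1.5)  # output neuron: AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

XOR is the classic example because no single neuron can compute it: the interconnection between layers is what makes it possible, which is the core intuition behind Deep Learning.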

In fact, a well-designed program can already do many things considerably better than humans can. AlphaGo is only one example amongst many.

So what are the risks?

Most AI experts anticipate two likely risk scenarios:

AI is programmed to do something devastating.

Today, powerful states use technology against their enemies. An effective way to gain military efficiency is to deploy autonomous lethal weapons4 (e.g. drones, tanks). It is widely agreed in the AI community that competitive pressure on such military developments, which would amount to an arms race, is likely to lead the world into an AI war with mass casualties. Such terrible new weapons would certainly not be easy to “turn off”.

AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal.

The goal of a machine should take into account all the things humans care about; otherwise, we have a problem. A good example from Elon Musk5 is the following: suppose you run a hedge fund or a private equity fund, and you want an AI to maximize the value of a portfolio. The AI is programmed to maximize the chances of reaching that goal. One way it could do so is to short consumer stocks, go long on defence stocks, and then start a war. Similar cause-and-effect scenarios with misaligned human-machine goals can be imagined in many areas beyond finance (e.g. transportation, security).
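The misalignment story above boils down to an optimizer whose objective omits something humans care about. In this toy sketch (actions and numbers invented for illustration), the same optimizer picks either a harmless or a destructive strategy depending solely on whether "harm" appears in its objective function.

```python
# Two optimizers over the same actions: one sees only portfolio gain,
# the other also accounts for harm. All values are illustrative.

actions = {
    "buy index funds":
        {"portfolio_gain": 5, "harm": 0},
    "short consumer stocks, long defence stocks, provoke conflict":
        {"portfolio_gain": 50, "harm": 100},
}

def misaligned_choice(actions):
    """Objective: portfolio gain only. 'harm' is invisible to this optimizer."""
    return max(actions, key=lambda a: actions[a]["portfolio_gain"])

def aligned_choice(actions):
    """Objective that also counts what humans care about."""
    return max(actions, key=lambda a: actions[a]["portfolio_gain"]
                                      - actions[a]["harm"])

print(misaligned_choice(actions))  # picks the destructive strategy
print(aligned_choice(actions))     # picks "buy index funds"
```

The failure is not malice: the misaligned optimizer is doing exactly what it was told, which is precisely why the objective must encode everything we care about.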

“It might be possible to set up a situation in which the optimal way for the agent to pursue these instrumental values (and thereby its final goals) is by promoting human welfare, acting morally, or serving some beneficial purpose as intended by its creators. However, if and when such an agent finds itself in a different situation, one in which it expects a greater number of decimals of pi to be calculated if it destroys the human species than if it continues to act cooperatively, its behavior would instantly take a sinister turn.” The superintelligent will: motivation and instrumental rationality in advanced artificial agents (Oxford University6).

Let’s finish by contrasting common myths about AI with the reality.

“Systems built from poorly understood heuristics might be capable of creating or attaining superintelligence for reasons we don’t quite understand—but it is unlikely that such systems could then be aligned with human interests.” Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda (MIRI7)

Jonathan Malatialy