One of the first things I have come to realize while researching Artificial Intelligence (AI) is how difficult it is to define. That is why I would like to delve a bit deeper into the basics so that the reader and I can be on the same page.

According to Wikipedia, AI is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and many animals. A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals. These goals can be simple or complex, explicitly stated or implicit in the problem, and they usually depend on the task the AI is trying to solve.

At this point we need to make a distinction between AI and algorithms, and it may get a bit messy due to what is known as the “AI effect,” which can be summarized as follows: as soon as AI successfully solves a problem, the problem seems to no longer be considered part of AI.

For example, when IBM’s chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used “brute force methods” and that it wasn’t real intelligence. Fred Reed writes:

“A problem that proponents of AI regularly face is this: When we know how a machine does something ‘intelligent,’ it ceases to be regarded as intelligent. If I beat the world’s chess champion, I’d be regarded as highly bright.”
Fred Reed (2006–04–14) “Promise of AI not so bright”.

So let us back up: we have algorithms, which are sets of unambiguous instructions that a mechanical computer can execute, and that do not imply any use of “intelligence” because every possibility has been considered and every response to it has been hardcoded. This is how Deep Blue beat Garry Kasparov: every time the human player made a move, the computer considered every possible move up to a certain depth, ranked the options according to how likely they were to succeed, and executed the move with the highest likelihood of giving it an advantage towards winning the game.
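To make the brute-force idea concrete, here is a minimal sketch of a depth-limited game-tree search in the spirit described above: explore every legal move up to a fixed depth, score the resulting positions, and play the move with the best score. This is an illustration, not Deep Blue’s actual code; `legal_moves`, `apply`, and `evaluate` are hypothetical helpers.

```python
# Minimal sketch of a depth-limited game-tree search, as described above:
# consider every move up to a certain depth, score the outcomes, play the best.
# `legal_moves`, `apply` and `evaluate` are hypothetical helpers on a position
# object, not Deep Blue's real code.

def search(position, depth, maximizing):
    """Best achievable score from `position`, looking `depth` half-moves ahead."""
    moves = position.legal_moves()
    if depth == 0 or not moves:
        return position.evaluate()               # static score of the position
    scores = (search(position.apply(m), depth - 1, not maximizing) for m in moves)
    return max(scores) if maximizing else min(scores)

def best_move(position, depth):
    """Pick the move whose subtree gives the computer the highest score."""
    return max(position.legal_moves(),
               key=lambda m: search(position.apply(m), depth - 1, maximizing=False))
```

Every response is, in effect, “hardcoded” in the sense that the evaluation function and the search rules never change; the program simply looks deeper and faster than a human can.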

This is where a very interesting new concept comes in: “machine learning,” which refers to an AI’s ability to “learn” new things. In the case of Deep Blue, unless changes were made to its code, the a priori chances of beating Kasparov the first time were the same as the chances of beating him after a thousand games, assuming that Kasparov’s level remained the same.

That is no longer the case: the best chess AIs nowadays can learn every time they play, so the initial algorithm tends to evolve to the point where the AI becomes a black box and its moves are unpredictable even to the people who programmed it. By “learning” we mean gaining the ability to execute strategies that were not hardcoded when the AI was first released.
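As a toy illustration of the difference (the feature names and learning rate below are made up, not taken from any real chess engine), a learning player keeps an evaluation function whose weights are nudged after every game, so its play after a thousand games is no longer the play it shipped with:

```python
# Toy illustration of "learning": the evaluation weights are adjusted after
# every game, so the program's behaviour changes with experience.
# The features and the learning rate are invented for the example.

weights = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def evaluate(features):
    """Score a position as a weighted sum of hand-picked features."""
    return sum(weights[name] * value for name, value in features.items())

def learn_from_game(game_positions, result, learning_rate=0.01):
    """Nudge the weights toward whatever predicted the result (+1 win, -1 loss)."""
    for features in game_positions:
        error = result - evaluate(features)      # how wrong the evaluation was
        for name, value in features.items():
            weights[name] += learning_rate * error * value
```

After enough games, the weights, and therefore the moves, can drift far from anything the original programmers wrote down, which is what makes the system feel like a black box.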

And here another closely related concept enters the picture: Big Data, data sets so big and complex that traditional data-processing software is inadequate to deal with them. These data sets allow an AI, for example, to examine and learn from every chess game ever recorded in a matter of hours.

So, if we mix AI with machine learning and Big Data, we end up with a domain-specific intelligence that is many times more advanced than human intelligence in that domain. This combination has led to AIs performing many tasks beyond chess much better than their human counterparts: driving, face recognition, pattern recognition, and so on.

Everything seems to point towards a future where AI outperforms humans in every task, which takes us closer to the subject of this article: the implications that ethics will have for AI. Even if machine learning allows machines to develop their own way of thinking, we can still hardcode a set of rules, or boundaries, that the AI will never be able to break. The same way we can let a kid do anything he wants inside a sandbox except 1) exit the sandbox, 2) fight other kids, and 3) eat the sand, we can let an AI do anything it wants except break a set of rules it is always obliged to abide by.

In that respect, one of the most famous formulations was made by Isaac Asimov with his Three Laws of Robotics, which we can easily extend to all AI:

  1. An AI may not injure a human being or, through inaction, allow a human being to come to harm.
  2. An AI must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

The original laws apply only to robots, but we are not sure what kind of container AI will have in the future. For example, movies such as 2001: A Space Odyssey or Her have shown more ethereal forms for sentient AIs.
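One simple way to picture a set of rules the AI “will never be able to break” is a hardcoded filter that every proposed action must pass before it is executed. The sketch below is purely conceptual; the predicates on `world` (`harms_human`, `disobeys_human_order`, and so on) are hypothetical placeholders, and actually implementing them is the hard part.

```python
# Conceptual sketch of hardcoded boundaries, loosely modelled on the three laws
# above. The predicates on `world` are hypothetical placeholders.

def first_violated_law(action, world):
    """Return the number of the first law the action would break, or None."""
    if world.harms_human(action):                      # First Law
        return 1
    if world.disobeys_human_order(action) and not world.order_conflicts_with_first_law(action):
        return 2                                       # Second Law
    if world.endangers_self(action):                   # Third Law
        return 3
    return None

def act(agent, world):
    """Let the AI do whatever it wants, except break one of the hardcoded rules."""
    for action in agent.ranked_actions(world):         # the AI's own preferences, best first
        if first_violated_law(action, world) is None:
            return world.execute(action)
    raise RuntimeError("Every available action breaks a law")
```

The dilemma discussed next is precisely the case where the final `RuntimeError` would be reached: every available action violates the First Law, and the filter has nothing left to allow.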

The problem with these laws is that there are ethical dilemmas where the AI in question has no choice but to break one of them, because it is forced to choose the lesser of two evils. Probably the most illustrative of these dilemmas is the Trolley Problem. The traditional thought experiment is presented as follows:

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two choices:

1. Do nothing and allow the trolley to kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

As you can see, this is a challenging situation, and it can easily be extended to a self-driving car controlled by an AI. If the car is about to drive into a group of five people and kill them all, should the AI steer the car and kill its passenger instead?

(Image: MIT’s Moral Machine. Source: http://moralmachine.mit.edu/)

In this situation the computer must break the First Law: an AI may not injure a human being or, through inaction, allow a human being to come to harm. Its only options are either to injure its passenger or to allow five people to come to harm through inaction.

Many possible solutions have been formulated for the Trolley Problem and its countless variations, but, obviously, there is no definitive answer, and I will leave it up to the reader to decide how she believes the AI should be programmed. If you want to take the experiment a bit further, visit MIT’s great Moral Machine web page linked above.

Another interesting angle is how AI seems to be more racist and sexist than human intelligence. When humans make decisions about hiring or granting a bank loan, they are more likely to be questioned about their judgement. But when it comes to AI, even if we were to try to guess what it based its decision on, we would be facing a black box: artificial neural networks cannot readily explain their decisions.

We could try to mitigate this problem by telling the AI not to use racial data when granting loans, but there are many other correlated variables, such as names (in the United States, for example, the names Wei and DeShawn are highly correlated with Asian American and African American people respectively). The AI could therefore infer the race and sex of a person from many proxy variables without ever taking those attributes into account explicitly. And if race or sex are correlated in the historical data with, say, the likelihood of repaying a loan, AIs would be more likely to deny loans, or grant them on worse terms, based on racist or sexist biases.
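A small sketch of this proxy effect (the data set, column names, and use of pandas and scikit-learn are illustrative assumptions, not a real lending data set): even after the protected attribute is dropped, a correlated column such as postal code can still reveal it.

```python
# Illustration of the proxy problem: race is dropped from the features, but a
# correlated column (postal code) lets a model recover it anyway.
# The data is invented for the example.

import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "postal_code": ["10001", "10001", "60629", "60629", "10001", "60629"],
    "income":      [52, 48, 51, 47, 50, 49],        # in thousands
    "race":        ["A", "A", "B", "B", "A", "B"],  # protected attribute
    "repaid":      [1, 1, 0, 1, 1, 0],
})

# "Fair" model: race is excluded from the training features...
features = pd.get_dummies(applicants[["postal_code", "income"]], columns=["postal_code"])
loan_model = LogisticRegression().fit(features, applicants["repaid"])

# ...but postal code alone already predicts race perfectly in this toy data,
# so the loan model can still behave as if it had been given race directly.
proxy = pd.get_dummies(applicants[["postal_code"]])
race_probe = LogisticRegression().fit(proxy, applicants["race"])
print(race_probe.score(proxy, applicants["race"]))   # 1.0: postal code fully reveals race here
```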

A great article published in the award-winning Canadian magazine The Walrus delves deeper into this problem:

“Let’s say we’re concerned about race as the factor of discrimination,” deep-learning pioneer Yoshua Bengio says. “Let’s say we see that, in our data, we can measure race.” Another constraint can be added to the neural network that compels it to ignore information about race, whether that information is implicit, like postal codes, or explicit. That approach can’t create total insensitivity to those protected features, Bengio adds, but it does a pretty good job.

The article adds:

A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias. This can involve running counterfactuals — having an algorithm analyze what might happen if a woman were approved for a loan, rather than simply combing through what’s happened in the past. It can mean adding constraints to an algorithm, ensuring that when it does make errors those errors are spread equally over every represented group. It’s possible to add a different constraint to the algorithm that lowers the threshold of, say, university acceptance for a particular group, guaranteeing that a representative percentage gets in — call it algorithmic affirmative action.
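As a rough sketch of what “adding a constraint to an algorithm” can look like in practice, one generic approach (my own illustration, not necessarily the exact method described in the article) is to add a penalty term to the training loss that grows whenever the model’s predictions differ between groups:

```python
# Sketch of a fairness penalty: the usual prediction loss plus a term that
# punishes gaps in average predictions between two groups. This is one generic
# technique, not necessarily the one described in the article.

import numpy as np

def fair_loss(predictions, labels, group, fairness_weight=1.0):
    """Cross-entropy plus a demographic-parity penalty.

    predictions: predicted probabilities in (0, 1), as a NumPy array
    labels:      true outcomes, 0 or 1, per example
    group:       protected-group membership, 0 or 1, per example
    """
    eps = 1e-9
    cross_entropy = -np.mean(labels * np.log(predictions + eps)
                             + (1 - labels) * np.log(1 - predictions + eps))
    gap = abs(predictions[group == 1].mean() - predictions[group == 0].mean())
    return cross_entropy + fairness_weight * gap
```

Minimizing this combined loss pushes the model to keep its accuracy while shrinking the gap between groups; the error-rate version mentioned in the quote would instead penalize differences in false-positive and false-negative rates.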

The problems mentioned above are just the tip of the iceberg, a small subset of the issues we are currently facing. If we think ahead to the year 2100, after the singularity and with sentient robots everywhere, the ethical problems become almost intractable from our current perspective.

All these problems make it obvious that philosophers, judges, and ethics professors, among other experts, will become more relevant in the AI field. In the meantime, it is up to us to keep learning about AI so that we can transition smoothly to a new world where AIs will be making most decisions, and hope that we do not end up in a dystopian future such as the one in Alphaville, one of my favorite sci-fi movies.
