Installing Ubuntu Linux on Terminator T-800, HOWTO.


Before starting a post about Artificial Intelligence, let me apologize for my English, since I am not a native speaker.

This is not about technology itself. I will probably write more about Kohonen-like networks in the future. This is about... trying to explain AI to my mother. I hope she will be a little less anxious. (Which is impossible, given the fact she is an Italian mother.)

Anyhow, this post is about explaining why nobody in the AI field will ever, ever, build Skynet.

Have you read about Artificial Intelligence being one of the biggest threats to humankind? Of course. Have you read about how machines could decide to exterminate us boring humans? Of course. Stephen Hawking said that, right? Well, no. He never said that. He actually made quite a long argument about military use of AI, but it was cherry-picked by the press.

Then... la laaaaaa. Here I am. You sort-of-summoned me. I am one of the crazy people doing these terrible things to humanity. It will be my fault (too) when the first AI we build becomes self-conscious, realizes we are the people who allow Miley Cyrus to exist, and of course goes for extermination. (What else? Miley Cyrus, you know?)

Joking aside, I think this "fear of AI" is a bit out of control. And this is, in my opinion, for two main reasons:

  1. Hollywood. Many (good) movies about Artificial Intelligence doing this and that.
  2. Ourselves. We are failing to explain what AI actually is.

Now, I am not sure it makes sense to blame Hollywood for producing very emotional movies. Really: I liked Terminator, at least the idea behind it (sort of). I liked The Matrix, too (sort of). It's their job to make frightening movies, so I would not blame them for that.

 If we need someone to blame, we only need a mirror. 

Let's give it a try and fix it. What is Artificial Intelligence?

Well, if we go back in history, Plato thought that the clearest evidence of a person being intelligent was the capability of doing mathematics. This seems fine, until we remember that what Plato called "mathematics" is something most of our calculators can do. Right in our smartphones. Even if we move further, into algebra and beyond, programs like Mathematica, MATLAB and others can do almost everything Plato had in mind. Even if we include proving theorems, I'm sorry to say that many proof assistants (Coq, Matita, Lean, HOL, and more) can do much more than Plato had in mind. Still, it is hard to say our laptops are "intelligent". Plato would say so, for example.
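Just to make this concrete: below is the kind of "Platonic mathematics" a modern proof assistant handles without effort. This tiny example is my own, written for Lean 4; any of the assistants above would do the same.

```lean
-- Two statements Plato would have counted as marks of intelligence,
-- checked mechanically by Lean 4.

-- Plain arithmetic: true by computation alone.
example : 2 + 2 = 4 := rfl

-- A general law: addition of natural numbers is commutative,
-- proved by induction, the same way a student would reason.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => simp [Nat.add_succ, Nat.succ_add, ih]
```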

Later, playing chess took the place of this "definition". Until we had machines able to play chess, and even this became outdated. Seems like a cat-and-mouse game, right?

Why do I mention that? Because when we talk about "intelligence", we are assuming that:

  1. If you are human, you are capable of being intelligent.
  2. If you are capable of being intelligent, you are somehow human, or human-like.

This is the main reason most of us think that "intelligent" means, more or less, something that "only humans can do". As a result, today's devices able to make decisions (even better than ours) are called "smart" and not "intelligent". In general, when a machine becomes able to do something which was human-only before, people stop thinking of this activity as "intelligence", and we call the machine "smart". Siri is not intelligent: it lives in a "smartphone". Smart. Not "intelligent".

Even more important, these two assumptions drive people to think that a machine which is "intelligent" will look like a human being, talk like a human being, be self-conscious, have "feelings" and make terrible decisions. Because this is what humans do.

Because the main bias about "intelligence" sounds like: "intelligent means... like us".

On top of this stack of mistakes, there are other narratives. Like the transhumanist one, which raises questions about the "Singularity". The Singularity is defined as an Artificial Intelligence able to do what our mind does, but better (or more; it is not clear which). In theory this definition covers any machine able to handle numbers, for example a calculator: we cannot compute at such a speed, and the human brain is terrible with numbers.

Since the assumption is that "intelligent" = "human", then "more intelligent than human" means "more human than human", so I can understand why people are concerned. In my opinion the issue here is the bias: the bias of being able to conceive "intelligence" only when it is associated with "human being".

Now, let's go back and check what we actually do in the AI field. How would I define AI to my mother? Talking machines? (bad) Decision making? Creativity?

I would define Artificial Intelligence as the ability to mimic functions normally associated with specifically human behavior. This is my personal opinion.

One example is Computer Vision. Anyone can build a cheap camera today, but this is not "vision". Vision is, more or less, when you know that some patch of color is actually a pen, that below it there is a table, and that the pen is on the table. Maybe you think this is done more with the eyes than with the brain... I'm sorry to say, most of the operation we call "seeing" is done by the brain. And it is quite a job.

Guess what: Computer Vision was considered Artificial Intelligence precisely in the period when almost no machine could do it successfully. Now that some cars can understand there is another car in front of them, and estimate the distance, we no longer consider Computer Vision to be Artificial Intelligence.
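To give an idea of what a "machine that sees" looks like in practice, here is a minimal sketch of my own using a pretrained detector from torchvision; any off-the-shelf detection model would make the same point.

```python
# A minimal "machine that sees" sketch: pixels in, labeled boxes out.
# Assumes torch and torchvision are installed; any pretrained detector
# would illustrate the same idea.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

# A random 3-channel tensor stands in for a real camera frame here.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# This is the "brain" part of seeing: raw pixels become statements like
# "this region is an object of class 3, with confidence 0.97".
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"class {int(label)} at {box.tolist()} (score {score:.2f})")
```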

Another example is Natural Language Processing. This is the ability to listen to someone speaking and get what they wanted to say, normally proven by providing a proper response back, in the same language, or at least some behavior which is consistent.

At the beginning of information technology, artificial languages were so limited that everybody thought "machines may only process, while talking is for humans". Now that people can buy commercial products which are able to talk, and which answer properly most of the time (better than some idiot I know, to be honest), it is very hard to think of Alexa or Siri as intelligent. It was easy when only humans were capable of that.
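Under the hood, the first generation of these talking products was not far from a glorified pattern matcher. Here is a toy sketch of my own, a deliberate caricature rather than the real Alexa or Siri pipeline:

```python
# A toy "assistant": map an utterance to an intent by keyword overlap,
# then produce a canned but consistent response. A caricature, of course.
INTENTS = {
    "weather": ({"weather", "rain", "sunny", "forecast"},
                "Today looks fine. Probably. I am a toy."),
    "time":    ({"time", "clock", "hour"},
                "It is exactly now o'clock."),
    "music":   ({"play", "music", "song"},
                "Playing something you will pretend to like."),
}

def answer(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent whose keywords overlap the utterance the most.
    best, overlap = None, 0
    for name, (keywords, response) in INTENTS.items():
        n = len(words & keywords)
        if n > overlap:
            best, overlap = name, n
    return INTENTS[best][1] if best else "Sorry, I did not get that."

print(answer("what is the weather forecast"))  # weather intent
print(answer("play some music"))               # music intent
print(answer("explain consciousness to me"))   # falls through
```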

Now it is the turn of learning. Machine learning is the current frontier, simply because matching the human capability to learn with (or without) examples is still quite hard. We have systems which can learn from examples, building decision trees out of them, and other systems (e.g. the Kohonen-like networks I work on) which are able to learn with no supervision, which means with no labeled examples.
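Since I mentioned Kohonen-like systems: here is a minimal self-organizing map in plain numpy, my own toy version, just to show what "learning with no supervision" means. The map arranges itself around the structure of the data without ever being told what the data is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 300 two-dimensional points from three blobs. No labels anywhere.
data = np.vstack([
    rng.normal(loc, 0.1, size=(100, 2))
    for loc in ([0.0, 0.0], [1.0, 0.0], [0.5, 1.0])
])

# A 5x5 Kohonen map: every node holds a weight vector living in input space.
side = 5
weights = rng.random((side * side, 2))
coords = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)

epochs = 20
for epoch in range(epochs):
    lr = 0.5 * (1.0 - epoch / epochs)           # learning rate decays
    sigma = 2.0 * (1.0 - epoch / epochs) + 0.5  # neighbourhood shrinks
    for x in rng.permutation(data):
        # Best Matching Unit: the node whose weights are closest to x.
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Pull the BMU and its grid neighbours toward the sample.
        d = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)

# After training, nearby nodes respond to similar inputs: the map has
# organized itself around the three blobs without ever seeing a label.
for blob_center in ([0.0, 0.0], [1.0, 0.0], [0.5, 1.0]):
    bmu = int(np.argmin(np.linalg.norm(weights - np.array(blob_center), axis=1)))
    print(f"blob {blob_center} -> map node {divmod(bmu, side)}")
```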

So, when we sum up everybody's work, what exactly is "Artificial Intelligence"?

Artificial Intelligence is a product. A product someone must sell.

This is the very point. This is the REAL point.

When I say "a product" I mean a machine which is purposely designed to follow a contract. You buy a broom under an implicit contract: the broom is useful to clean your floor. You buy a car under the implicit contract that the car is supposed to move, to keep you alive, and to run on the streets we have. And so on.

If we try to imagine how cars will improve, we take "the contract" and we say: "cars will make better use of the streets we have, they will improve our safety, they will move more people".

Each and every thing an existing machine is capable of doing was implemented. Designed. And IT COST MONEY. If you want a machine which purposely decides to kill you, you must PAY for this function. Yep.

What do we expect from Artificial Intelligence? What is the contract?

  1. Artificial Intelligence will do for me something I cannot do alone.
  2. Artificial Intelligence will do for me something I do not want to do.
  3. Artificial Intelligence will do something for me better than I can.
  4. Artificial Intelligence spares me from asking another person to do something in my place.

So, when a customer orders some "Artificial Intelligence", the order says something like: "This will do the boring reporting I must produce every week." "This product will make better stock market predictions." "This AI will move the camera to zoom on the face of the criminal when someone commits a violent crime in a public space." It is a product, right? It must be useful.

A product is useful by contract. The only way to sell a product is to make something which, somehow, fulfills a contract. The result is that the machines we build are designed with the contract in mind. When we build a neural network in charge of scalping the Forex market, we do not implement the capability to decide whether humankind deserves to exist. The customer wants to make money. That is what they will pay for.
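A sketch of what I mean, with a toy network and made-up numbers: the "contract" is literally hard-wired into the output layer.

```python
import numpy as np

# A toy "scalping" policy network, sketched in plain numpy. Its entire
# universe of action is fixed at design time: three outputs, nothing more.
ACTIONS = ["buy", "sell", "hold"]

rng = np.random.default_rng(42)
W1 = rng.normal(0, 0.1, size=(16, 8))  # made-up weights; in a real
b1 = np.zeros(8)                       # product these come from training
W2 = rng.normal(0, 0.1, size=(8, 3))
b2 = np.zeros(3)

def decide(features):
    """Map 16 market features (prices, spreads, ...) to one action."""
    h = np.tanh(features @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return ACTIONS[int(np.argmax(p))]

print(decide(rng.normal(size=16)))
# Whatever the weights learn, this network can only ever output "buy",
# "sell" or "hold". "Exterminate humankind" is not in the output layer,
# so no amount of training can make it choose that.
```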

As long as AI is a product, built for people who expect the product to work in certain ways, the product will do what the customer is paying for. End of story. Sure, we could mention that the army could be the customer, and they want killing machines; still, the implicit contract is that "our killing machines aren't killing us".

Here we are. Now I know your next objection: self-consciousness.

Because when something is "self-conscious", it could "decide" we are stupid, rebel and then kill us.

Here we run into the issue of "consciousness".

There are many people discussing the relationship between the brain (wetware) and computing. For example, I would like to introduce you to a guy, Henry Markram (https://en.wikipedia.org/wiki/Henry_Markram). When you want to discuss self-consciousness, he is one of the best people to do it with. He got ~€1bn to build the first simulation of the whole cortical functions of the human brain, just to say. He did some pretty nice work on "liquid state computing".

Now, when you come from computing and you start reading works about "consciousness", the first thing that happens is... you get lost. You can read works by Winfried Denk, Timothy Bliss, and many others, to understand that...

  1. We are not alone in our brain. We have many "consciousnesses" inside.
  2. Most of the functions of human intelligence are... something we would call "a personality".
  3. We have more than one image of ourselves, built by our brain.
  4. We have more than one idea of reality running in our brains.

Imagine we have a team in our brain. There is a desk where a guy sits, and this guy's name is "James E. Fear". He is in charge of "fear". It is his job. We would say he is quite a disturbed guy, always thinking in terms of fear. Actually, James fears everything. If you talk with him, he will tell you what a terrible threat the floor is to you. Not to mention the window. He knows terrible stories about windows. Making up terrible stories is his job in the team. There are many other "people" in this team, each one with a precious function in our brain: say whatever you want about James, but, trust me, if you see a lion in front of you... do what James says!

Nevertheless, if your girlfriend asks you to go to a restaurant, you should probably NOT listen to James: the nuclear mayhem he has in mind is probably not what will happen in the bistro you choose. It's gonna be fun.

Joking aside, according to most eminent scientists, a self-conscious brain is a kind of teamwork, where many different views, functions, thoughts and ideas compete, and the way they are fed and the way they play in the team produces something you call "consciousness".

And each of these pieces is complex enough to fit the "common sense" definition of "personality". Yes, James E. Fear could look like a "full person" if you could isolate his function alone.
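If you want a computing analogy for this teamwork (purely my illustration, not a neurological model), think of a handful of modules voting on the same situation, with the context deciding how loud each voice is:

```python
# A toy "team in the brain", in the spirit of the James E. Fear story:
# several simple modules score the same situation and the loudest,
# context-weighted voice wins. Purely illustrative, not neuroscience.
modules = {
    "fear":      lambda s: 0.9 if "lion" in s else 0.1,
    "hunger":    lambda s: 0.8 if "restaurant" in s else 0.2,
    "curiosity": lambda s: 0.5,
}

def decide(situation, weights):
    votes = {name: f(situation) * weights.get(name, 1.0)
             for name, f in modules.items()}
    return max(votes, key=votes.get), votes

# In front of a lion, listen to James:
print(decide("a lion in front of you", {"fear": 1.0}))
# At dinner with your girlfriend, turn James down:
print(decide("a restaurant with your girlfriend", {"fear": 0.2}))
```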

This is just to give an idea of what a tangle a "self-conscious" brain is.

At the current state of the art, "consciousness" is not something we can discuss when talking about machines: the reason is that humankind has little knowledge of what "real" consciousness is, so we cannot describe it precisely enough to reproduce it.

Any discussion about "machines being self-conscious" is completely void, simply because the term is not defined well enough to be reproduced in a machine. The reason we are not going to produce "Skynet", defined as a self-conscious machine, is very simple:

  1. Even if we were able to build such hardware, we don't know how it should work.
  2. It is not the job of computer experts to understand what self-consciousness is: that is a job for other people.
  3. Nobody would buy an unpredictable machine that takes whatever decision it likes.

The first point is about science and engineering: you cannot build something when you don't know how it is supposed to work. The second point is that information technology, even with all its momentum, cannot take the place of neurology, and the human brain is the only example of something we are SURE is "self-conscious".

Even if one were built by accident, nobody would buy it. You cannot sell something by telling the customer: "this device will cost you $100,000, and it will do... well... something. It depends. Maybe."

Putting it all together: you can grab your popcorn and watch your sci-fi again.

It is not gonna happen.