Technology: Applying what we don't understand

In my deep learning research, I have repeatedly come across a sentiment that goes like this:

> Here is a picture that shows how this works at a high level. You can research this more if you are interested in how it really works, but here is how you use it.

You find this in many areas of daily life:

  • I drive a car, yet I could never repair it myself, let alone build one
  • Most of us use smartphones and computers without a deep understanding of how they work. I have years of experience as a computer operator and programmer, and yet there are still aspects of the technology, like chip architecture, that I could not replicate on my own
  • I flip light switches in my house and expect the lights to come on. If the wiring goes bad, I don't immediately know how to fix it

These examples are somewhat trivial, because most of us could come to understand how these things work and master them. However, there are new, transformative technologies coming down the pipe that are difficult for even the most intelligent of our species to understand. The technologies themselves will transform society, but they will also bring an epistemological shift in how we relate to tools we use but cannot understand.

Do we need to understand how something works to trust that it will work?

I am thinking of AI specifically, but this is an issue in biomedical technology, space exploration, and other fields as well. Consider the self-driving car as an example. Many experts predict that self-driving electric vehicles will dominate the market by the end of the 2020s. The coming economic implications are well explored in popular media, but what of the intellectual implications?

To be clear, I am for automation. I look forward to the potential of AI and related technologies to transform society and create a post-scarcity world. Yet I feel a nagging discomfort about trusting AI that surpasses human intelligence. What does it mean when we can no longer debug and fix the systems we rely on? Will we be humble enough to admit that we need our AI systems to solve the hard problems for us? Will we trust them?

Consider my life with pets:

I have had both dogs and cats. In my experience, when a dog is sick and needs to take a pill, it is willing. Dogs trust that we have their best interests at heart and that if we are giving them a pill, it will help them feel better. Cats, on the other hand... they will fight tooth and claw against anything they don't understand. Cats know we provide food, shelter, and love, but that does not change their attitude toward things they don't understand. If they don't understand it, they don't trust it.

When AI surpasses our intelligence, what will we be: dogs or cats?