Human Intelligence

in #technology · 8 years ago

Is it possible to imagine a world in which human interests are not the highest priority? Are technological progress and innovation possible if such developments are not created (primarily) in the service of humans? Should we want that? And by what criteria could we judge such progress?

Consider our growing awareness of our ongoing relationship with the planet, and the desire to keep it a viable place to live. This awareness grows as we come to understand the large-scale consequences of our actions. And yet, even as our awareness grows, so does the damage we do to the planet.

Such paradoxes are not strange to us. We often want to do one thing while doing the opposite. Morality, conscience, empathy, a sense of purpose: these are the dramatic elements in us that keep constant watch over our failings. No one ever said we would succeed, but we try to "heroicise" our failure into a meaningful narrative and package it as a sign of our tenacity. And our failure is never permanent, because hope always glimmers on the horizon. If there is one thing no one can take away from us, it is our hope.

As a filmmaker and storyteller, I like to explore this dilemma. I am convinced that, regardless of our place in the scheme of things, our human experiences are worth articulating, because we happen to be the only ones undergoing this unique experience, and the only ones able to convey our thoughts to our contemporaries and to future generations.

To think about an intelligence that is not characterised by failure is to think about the end of the human narrative. Not the end of the world; on the contrary. We were never very good at protecting our planet, but boy oh boy, can we tell good stories about it.


Another great piece. AI and immigration - topical!

Thanx for following Katiecruel