AI Ethics and the Continuum of Sentience


Some years ago I gave a talk called “Consciousness and the Transhuman”, in which I discussed important connections between questions of neuroscience, advancing cognitive technologies, human augmentation, and animal rights. Around the same time, the philosopher David Pearce was beginning to popularize the concept of the “Hedonistic Imperative”: a drive to eliminate animal and human suffering through technology. Now I would like to move the conversation forward by introducing a new conceptual tool: the Continuum of Sentience.

First, let us begin with a point of reference: the nature and significance of sentience. For our purposes here, let’s define sentience as (the capacity for) subjective perception and experience. Ethics are irrelevant in the absence of sentience, as all known ethical systems are based upon the avoidance of unnecessary or unjustified suffering, and only sentient beings are capable of suffering. For example, it is impossible to act unethically toward an inanimate object per se, although one’s actions toward an inanimate object (e.g. stealing or destroying it) may be unethical if they cause suffering to sentient beings. Our point of reference is therefore zero sentience, which is also the point at which ethical rules do not apply.

From that point on, things get complicated. There are arguably different degrees and types of sentience, suggesting different degrees and types of ethical implication, and we must understand them if we wish to act ethically in a world of rapidly developing AI technologies (or indeed to act ethically toward any living thing). The Continuum of Sentience (CoS) is a single, broad measure of multiple correlated phenomena, namely physiological and behavioural complexity, subjective experience, capacity for suffering, degrees of consciousness, and arguably life itself. Following the principles of good science, we should not rely solely on one type of observation when assessing an entity’s degree or type of sentience. Our understanding of the relationship between cognitive abilities and physiological structures or system architectures may be incomplete, and any entity’s reports of its subjective experience may be misleading. By observing and correlating multiple measures of (1) physiological similarity to known cognitive architectures, (2) behaviour, and (3) subjective report, we can develop an increasingly reliable overall measure of sentience, as sketched in the example below.
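
The article leaves open how the three measures would actually be combined. As a minimal sketch (all names, the 0–1 scales, the weights, and the weighted-mean combination are my own illustrative assumptions, not a method proposed here), a composite CoS score might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class SentienceEvidence:
    """Hypothetical evidence bundle; field names and 0-1 scales are illustrative."""
    physiological_similarity: float  # similarity to known cognitive architectures
    behavioural_complexity: float    # richness and flexibility of observed behaviour
    subjective_report: float         # credibility-weighted self-report

def cos_score(e: SentienceEvidence,
              weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Combine the three measures into a single 0-1 continuum position.

    A simple weighted mean; a real assessment would need to model the
    correlations between measures rather than treat them as independent.
    """
    values = (e.physiological_similarity,
              e.behavioural_complexity,
              e.subjective_report)
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)

# Example: an entity with mammal-like physiology and complex behaviour,
# but no capacity for verbal report.
print(cos_score(SentienceEvidence(0.8, 0.7, 0.0)))  # 0.5
```

The point of the weighted mean is only that no single channel of evidence dominates; how the weights should be set, and how conflicting evidence should be reconciled, is exactly the open question the CoS is meant to frame.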

But what are these “degrees and types of sentience”? How can sentience be anything other than a unitary phenomenon, simply existing or not? “Degree” is a question of the characteristics associated with a particular sentient process. Philosophers such as Daniel Dennett and Thomas Nagel have long noted that conscious awareness has content, which is to say that in order to be aware you must be aware of something. We may therefore consider a perceptual process representing richer content (e.g. high-resolution colour images and audio, versus low-resolution grayscale with no audio) to be “more sentient” than a less rich one, although perhaps the more accurate terminology would be “content-rich sentience”. This basic level of sentience, no matter how content-rich, does not necessarily require reflexive consciousness, awareness of one’s own mental contents, sapience, or capacity for explicit logical reasoning.
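
To make the idea of content richness concrete, here is a rough back-of-the-envelope comparison (the figures and the bits-per-second metric are purely illustrative assumptions of mine; raw channel capacity is at best a crude proxy for phenomenological richness):

```python
# Crude channel-capacity comparison of the two percepts from the example
# above: high-resolution colour video with stereo audio, versus
# low-resolution grayscale with no audio.
rich = 1920 * 1080 * 24 * 30 + 44_100 * 16 * 2  # colour video + stereo audio
poor = 320 * 240 * 8 * 10                        # low-res grayscale, no audio
print(f"rich: {rich:,} bits/s")      # rich: 1,494,403,200 bits/s
print(f"poor: {poor:,} bits/s")      # poor: 6,144,000 bits/s
print(f"ratio: {rich / poor:.0f}x")  # ratio: 243x, by this crude metric
```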

Such “higher forms” of sentience are the different types referred to earlier. The most advanced forms of intelligence that we currently know of are capable of complex reasoning and linguistic ability, and such capabilities go hand-in-hand with historical terms such as “sapience” and “consciousness”. Unfortunately such terms are operationally ill-defined (a fact which has given rise to entire literatures of debate), so for the purposes of the CoS we will refer only to higher sentience types (HST), defined by specific characteristics, capacities, and mechanisms. The most fundamental HST mechanism is recursive processing, also known as metarepresentation or Higher Order Thought (HOT) in the psychological literature. The idea is that some systems are capable of representing some part of their own inner workings, and that such metaknowledge is the basis of self-awareness. Humans have a tendency to imagine that their metaknowledge is more or less total, when it most emphatically is not: much of our own neurological (and arguably cognitive) activity is opaque to us.
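
As a toy illustration of metarepresentation (the class and method names are hypothetical, not drawn from the HOT literature), consider a system that keeps a deliberately partial record of its own processing and can report on that record, but on nothing outside it:

```python
class MetaRepresenter:
    """Toy metarepresentation: the system records some of its own
    first-order processing and can form higher-order representations
    of that record. The cap on the record mirrors the point above,
    that metaknowledge is partial, never total."""

    def __init__(self):
        self._trace = []  # the fraction of inner workings the system can "see"

    def perceive(self, stimulus: str) -> str:
        percept = f"percept({stimulus})"
        # Only some processing is recorded; the rest stays opaque,
        # like the neural detail we have no introspective access to.
        if len(self._trace) < 3:
            self._trace.append(percept)
        return percept

    def introspect(self) -> list[str]:
        # A higher-order representation: a representation of the
        # system's own representations, not of the outside world.
        return [f"I represented: {p}" for p in self._trace]

m = MetaRepresenter()
for s in ["red", "loud", "warm", "sweet"]:
    m.perceive(s)
print(m.introspect())  # "sweet" was processed but never entered the self-model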

To summarize, the Continuum of Sentience ranges from entities with the barest glimmerings of perceptual awareness on the one hand, to beings capable of rich phenomenological content, self-awareness, deep access to their own cognitive resources, and complex linguistic and reasoning powers on the other. The Continuum also acts as a measure of ethical responsibility for all who would seek to avoid causing suffering to others. Of course one may decouple ethics from suffering and claim that it may be ethical to cause suffering to highly intelligent and aware organisms, but such a position is rarely held (or at least made explicit) in the world today. Arguments that certain levels or types of suffering may be justified under particular circumstances are tangential to my purpose here, which is simply to introduce a conceptual tool for considering the cognitive capacities and associated rights of any intelligent system or organism.

By Amon Twyman
Co-published on Transhumanity.net  
