LABMP 590: Technology and the Future of Medicine

CCIS L1-140, T R 2:00 - 3:20 pm

 

November 27    Humans and Intelligent Machines: Co-evolution, Fusion, or Replacement 
                                        

David Pearce and Kim Solez

PowerPoint Presentations:

David Pearce: Humanity's Successors, Our Descendants

Additional Links:

Here is the video for today's teaching session with David Pearce: http://www.youtube.com/watch?v=wyBG2y7CGrI  In class today we will start at approximately 12:03 ("Actually there is a human friendly ...") and play to 13:38 ("such short time scales"), followed by discussion, then further playing of the video until a hand is raised for questions, and so forth. This highly dynamic approach to the material should keep everyone alert and on the edge of their seats for the full 80-minute class period.

Here, from http://www.biointelligence-explosion.com/parable.html, is the section that immediately precedes the point where we will start (in italics), followed by that section itself (not in italics):

1.1.1. What Is Coherent Extrapolated Volition?

The Singularity Institute conceive of species-specific human-friendliness in terms of what Eliezer Yudkowsky dubs "Coherent Extrapolated Volition" (CEV). To promote Human-Safe AI in the face of the prophesied machine Intelligence Explosion, humanity should aim to code so-called Seed AI, a hypothesised type of strong artificial intelligence capable of recursive self-improvement, with the formalisation of "...our (human) wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

Clearly, problems abound with this proposal as it stands. Could CEV be formalised any more uniquely than Rousseau's "General Will"? If, optimistically, we assume that most of the world's population nominally signs up to CEV as formulated by the Singularity Institute, would not the result simply be countless different conceptions of what securing humanity's interests with CEV entails - thereby defeating its purpose? Presumably, our disparate notions of what CEV entails would themselves need to be reconciled in some "meta-CEV" before Seed AI could (somehow) be programmed with its notional formalisation. Who or what would do the reconciliation? Most people's core beliefs and values, spanning everything from Allah to folk-physics, are in large measure false, muddled, conflicting and contradictory, and often "not even wrong". How in practice do we formally reconcile the logically irreconcilable in a coherent utility function? And who are "we"? Is CEV supposed to be coded with the formalisms of mathematical logic (cf. the identifiable, well-individuated vehicles of content characteristic of Good Old-Fashioned Artificial Intelligence: GOFAI)? Or would CEV be coded with a recognisable descendant of the probabilistic, statistical and dynamical systems models that dominate contemporary artificial intelligence? Or some kind of hybrid? This Herculean task would be challenging for a full-blown superintelligence, let alone its notional precursor.

CEV assumes that the canonical idealisation of human values will be at once logically self-consistent yet rich, subtle and complex. On the other hand, if in defiance of the complexity of humanity's professed values and motivations, some version of the pleasure principle / psychological hedonism is substantially correct, then might CEV actually entail converting ourselves into utilitronium / hedonium - again defeating CEV's ostensible purpose? As a wise junkie once said, "Don't try heroin. It's too good." Compared to pure hedonium or "orgasmium", heroin is about as much fun as taking aspirin. Do humans really understand what we're missing? Unlike the rueful junkie, we would never live to regret it.

One rationale of CEV in the countdown to the anticipated machine Intelligence Explosion is that humanity should try and keep our collective options open rather than prematurely impose one group's values or definition of reality on everyone else, at least until we understand more about what a notional super-AGI's "human-friendliness" entails. However, whether CEV could achieve this in practice is desperately obscure. Actually, there is a human-friendly - indeed universally sentience-friendly - alternative or complementary option to CEV that could radically enhance the well-being of humans and the rest of the living world while conserving most of our existing preference architectures: an option that is also neutral between utilitarian, deontological, virtue-based and pluralist approaches to ethics, and also neutral between multiple religious and secular belief systems. This option is radically to recalibrate all our hedonic set-points so that life is animated by gradients of intelligent bliss - as distinct from the pursuit of unvarying maximum pleasure dictated by classical utilitarianism. If biological humans could be "uploaded" to digital computers, then our superhappy "uploads" could presumably be encoded with exalted hedonic set-points too. This conjecture assumes that classical digital computers could ever support unitary phenomenal minds.

However, if an Intelligence Explosion is as imminent as some Singularity theorists claim, then it's unlikely that either an idealised logical reconciliation (CEV) or radical hedonic recalibration could be sociologically realistic on such short time scales.

The other times of special interest in the video are:
44:22 - Section 4.2, The Binding Problem: Are Phenomenal Minds a Classical or a Quantum Phenomenon?
47:22 - Section 4.3, Why the Mind Is Probably a Quantum Computer
56:15 - Section 4.5, The Infeasibility of "Mind Uploading"

 

This is optional background viewing for Tuesday's lecture and discussion by David Pearce:

 
http://www.youtube.com/watch?v=CCy5guYcgVM
 
http://www.youtube.com/watch?v=nMMowV6BEvc
 
http://www.youtube.com/watch?v=w3hrbvK5qUU
 
http://www.youtube.com/watch?v=24snPEV8qSI
 
http://www.youtube.com/watch?v=VuTCquCqR0I
 
http://www.youtube.com/watch?v=qjfv4QJxwQM
 
http://www.youtube.com/watch?v=T-rXxzlfhaI
 

Life is full of surprises! David Pearce has sent us a 9,500-word document at http://www.biointelligence-explosion.com/parable.html for Tuesday's teaching session, November 27th at 2 pm in CCIS L1-140, along with another 700 words of chat-text interaction with me, below. Please read as much of this as you can and join us for what I am sure will be a fascinating teaching session on Tuesday!
 

 

November 29          Renewing Our Commitment to Progress
                                        

James Hughes and Kim Solez

PowerPoint Presentations:

James Hughes: Renewing our Commitment to Progress

 

Reading:

 
The video is at:


http://ieet.org/index.php/IEET/more/renewingcommitmenttoprogress20120502


It is 18 minutes long. After that we will have a discussion, then watch as much of this second video as seems feasible:


http://www.youtube.com/watch?v=rDWCN_bQfc8

Minutes 20:04-58:49: The Mythopoetic Meme, Technology, and Human Happiness. The views expressed here contrast nicely with those of David Pearce.

 


In the Singularity One on One interview (http://www.youtube.com/watch?v=rDWCN_bQfc8), the discussion of James Hughes' book Citizen Cyborg starts at 21:05, with a specific reference to David Pearce at 22:46. That is where we will start in the second half of the class session tomorrow.

 

David Pearce writes in response:

"If it was possible to become free of negative emotions by a riskless
implementation of an electrode - without impairing intelligence and the critical mind - I would be the first patient."
Dalai Lama (Society for Neuroscience Congress, Nov. 2005)

There would be strong selection pressure against any tendency to wirehead - or the pharmacological and genetic counterparts of wireheading - in any crude sense of "wireheading".

But in the coming era of "designer babies", will there be selection pressure against hedonic set-point recalibration, i.e. a genetic predisposition to life animated entirely by information-sensitive gradients of well-being? This is much less clear. IMO life's future is "hyperthymic".

James and I do disagree over the extent to which radical improvements of the human [and indeed nonhuman animal] condition can be achieved by social and technological progress alone _in the absence of_ reward pathway enhancements. But I have an immense respect for Buddhists and Buddhism, not least the tenet of ahimsa [recall James is a former Buddhist monk].

"May all that have life be delivered from suffering" Gautama Buddha. Was Buddha a proto-negative utilitarian?
High technology gives us the capacity to make Gautama Buddha's dream a reality....

Dave


 


For further information:


Preeti Kuttikat: 780-407-8385, banffap@ualberta.ca
Kim Solez, M.D.: 780-710-1644, Kim.Solez@Ualberta.ca


Last Modified: Monday December 03, 2012 10:52:49 AM kim.solez@ualberta.ca