LABMP 590: Technology and the Future of Medicine
CCIS L1-140, T R 2:00 - 3:20 pm
November 27 Humans and Intelligent Machines: Co-evolution, Fusion, or Replacement
David Pearce and Kim Solez
David Pearce: Humanity's Successors, Our Descendants
Here is the video for today's teaching session with David Pearce: http://www.youtube.com/watch?v=wyBG2y7CGrI In class today we will start at approximately 12:03 ("Actually there is a human friendly ...") and play to 13:38 ("such short time scales"), followed by discussion, then further playing of the video until a hand is raised with a question, and so forth. This highly dynamic approach to the material should keep everyone alert and on the edge of their seats for the full 80-minute class period.
Here from http://www.biointelligence-explosion.com/parable.html is the section that immediately precedes where we will be starting (in italics), and then that section itself (no italics):
1.1.1. What Is Coherent Extrapolated Volition?
The Singularity Institute conceive of species-specific human-friendliness in terms of what Eliezer Yudkowsky dubs "Coherent Extrapolated Volition" (CEV). To promote Human-Safe AI in the face of the prophesied machine Intelligence Explosion, humanity should aim to code so-called Seed AI, a hypothesised type of strong artificial intelligence capable of recursive self-improvement, with the formalisation of "...our (human) wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."
Clearly, problems abound with this proposal as it stands. Could CEV be formalised any more uniquely than Rousseau's "General Will"? If, optimistically, we assume that most of the world's population nominally signs up to CEV as formulated by the Singularity Institute, would not the result simply be countless different conceptions of what securing humanity's interests with CEV entails - thereby defeating its purpose? Presumably, our disparate notions of what CEV entails would themselves need to be reconciled in some "meta-CEV" before Seed AI could (somehow) be programmed with its notional formalisation. Who or what would do the reconciliation? Most people's core beliefs and values, spanning everything from Allah to folk-physics, are in large measure false, muddled, conflicting and contradictory, and often "not even wrong". How in practice do we formally reconcile the logically irreconcilable in a coherent utility function? And who are "we"? Is CEV supposed to be coded with the formalisms of mathematical logic (cf. the identifiable, well-individuated vehicles of content characteristic of Good Old-Fashioned Artificial Intelligence: GOFAI)? Or would CEV be coded with a recognisable descendant of the probabilistic, statistical and dynamical systems models that dominate contemporary artificial intelligence? Or some kind of hybrid? This Herculean task would be challenging for a full-blown superintelligence, let alone its notional precursor.
CEV assumes that the canonical idealisation of human values will be at once logically self-consistent yet rich, subtle and complex. On the other hand, if, in defiance of the complexity of humanity's professed values and motivations, some version of the pleasure principle / psychological hedonism is substantially correct, then might CEV actually entail converting ourselves into utilitronium / hedonium - again defeating CEV's ostensible purpose? As a wise junkie once said, "Don't try heroin. It's too good." Compared to pure hedonium or "orgasmium", heroin is no more fun than taking aspirin. Do humans really understand what we're missing? Unlike the rueful junkie, we would never live to regret it.
One rationale of CEV in the countdown to the anticipated machine Intelligence Explosion is that humanity should try to keep our collective options open rather than prematurely impose one group's values or definition of reality on everyone else, at least until we understand more about what a notional super-AGI's "human-friendliness" entails. However, whether CEV could achieve this in practice is desperately obscure. Actually, there is a human-friendly - indeed universally sentience-friendly - alternative or complementary option to CEV that could radically enhance the well-being of humans and the rest of the living world while conserving most of our existing preference architectures: an option that is also neutral between utilitarian, deontological, virtue-based and pluralist approaches to ethics, and also neutral between multiple religious and secular belief systems. This option is radically to recalibrate all our hedonic set-points so that life is animated by gradients of intelligent bliss - as distinct from the pursuit of unvarying maximum pleasure dictated by classical utilitarianism. If biological humans could be "uploaded" to digital computers, then our superhappy "uploads" could presumably be encoded with exalted hedonic set-points too. This conjecture assumes that classical digital computers could ever support unitary phenomenal minds.
However, if an Intelligence Explosion is as imminent as some Singularity theorists claim, then it's unlikely that either an idealised logical reconciliation (CEV) or radical hedonic recalibration could be sociologically realistic on such short time scales.
The other times of special interest in the video are: 44:22, section 4.2, "The Binding Problem: Are Phenomenal Minds a Classical or a Quantum Phenomenon?"; 47:22, section 4.3, "Why the Mind Is Probably a Quantum Computer"; and 56:15, section 4.5, "The Infeasibility of 'Mind Uploading'".
This is optional background viewing for Tuesday's lecture and discussion by David Pearce: http://www.youtube.com/watch?
Life is full of surprises! David Pearce has sent us a 9,500-word document at http://www.biointelligence-explosion.com/parable.html for Tuesday's teaching session, November 27th at 2 pm in CCIS L1-140, and then another 700 words of chat text interaction with me below. Please read as much of this as you can and join us for what I am sure will be a fascinating teaching session on Tuesday!
November 29 Renewing Our Commitment to Progress
James Hughes and Kim Solez
David Pearce writes in response:
For further information:
Preeti Kuttikat: 780-407-8385, email@example.com
Kim Solez, M.D.: 780-710-1644, Kim.Solez@Ualberta.ca
Last Modified: Monday December 03, 2012 10:52:49 AM