-----8<----- Cryo Revival Requests draft, 0.2.0, dated Tuesday, the 27th of September, 2016 AD.

Greetings from 2016! I write this letter to a particular audience (though I don't mind anyone else reading it). Specifically, I hope that you are reading this if: 1) I have died, 2) my body has been cryopreserved, 3) new technologies have continued to be developed, and 4) you are considering whether to try using a particular method to revive me.

My goal here is to describe some of my preferences about being revived. I'm hoping to go into enough detail, and with enough explanation of my reasoning, that you can extrapolate what choices I would make in circumstances I can't guess about today.

To start with, there is a simple, fundamental fact: there are things that I value. Slightly less simple: nobody else values the same things I do, in quite the same way I value them. This means that the one person whom I can trust the most to work towards those values (and any reasonable extrapolations of them) is myself. If I permanently die, I can no longer do that, which is one of the main reasons I've signed up for my body to be cryonically preserved, despite my best estimate that there's still a 96% chance that I will never be revived. That remaining 4% chance of being revived, and of being able to personally continue to work towards my values, is worth the costs.

Trying to be revived is one example of an "instrumental goal": a method of achieving my "terminal goals". There are a great many different instrumental goals which are worth working for, no matter what your terminal goals are. A rule of thumb that I use is to consider the terminal goals of "keeping myself alive into the indefinite future" and "keeping any sapience at all alive into the indefinite future"; anything I can do that would help with both of these goals, one the epitome of selfishness and the other the epitome of altruism, is almost certainly worth working on.
While I do value my own life, I don't understand consciousness very well. I have at least a general idea of every level of how a computer works, from electrons through logic gates and memory cells to software, to the point that I could make a reasonable attempt to build a computer from scratch; but I don't know how the cells of a brain produce the subjective sensation of qualia, which leads me to want my own consciousness to continue.

I've thought about how much I value my life, and I honestly don't know what I would be willing to trade it for. I'm reasonably confident about some extremes, such as accepting a small increase in the odds of my permanent extinction if that would greatly increase the odds that some sapience would continue to exist; but I have trouble figuring out what tradeoffs I find acceptable today, let alone guessing what tradeoffs I would find acceptable tomorrow.

Another important instrumental goal is one that my more libertarian friends prize very much: when I'm working towards my values, and you're working towards yours, it's often possible to come to an agreement that helps both of us achieve our goals. Put another way, we can have "positive-sum interactions", even if I happen to be dead at the time. The most obvious thing that you can do for me is bring me back to life, and I've been able to think of three ways that I may be able to help you in return.

First, and most generally, my experiences differ from those of the average person today to the degree that, sometimes, I can be the first to start working on a useful idea.

Second, somewhat more personally: in present-day terms, I prefer being by myself to the point that I could be described as having "schizoid personality disorder", which could let me be productive in isolated circumstances that people who desire more, and more direct, social contact than I do would find uncomfortable.
And third, if you are considering experimental methods to create a software emulation of a human brain, I am willing to accept a certain amount of personal risk that I could be permanently killed when I might otherwise have been revived, by giving advance permission for my own mind to be the subject of such experiments, as long as certain common-sense preconditions (described below) are met.

I am fully aware that, since I will be dead, these requests and preferences will have no legal force under current law, and I do not expect the law to change in a way that will give them legal force. I am also aware that if you wish to disregard my preferences, there is nothing I can do to stop you. If you wish to revive me in a manner that does not meet the standards I'm about to describe, I urge you to take a significant amount of time to consider whether such a revival truly is the most moral and ethical choice you can make. If you are having trouble determining whether your choice is ethical, then I suggest that you use the twin standards of maximizing the odds of your own long-term survival, and of maximizing the odds that any sapience at all survives into the indefinite future, as your guides.

I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a particular physical brain, or to uninterrupted continuity. I understand that all choices carry risk. My specific goal with these requests, preferences, and standards is simple: to maximize the odds that I will be alive at any given point in the indefinitely-distant future. If you have data that I don't, and can perform Bayesian updates on Fermi estimates, then you can probably make a better guess than I can today about what choices I would make to maximize my survival odds, if I had access to the same data and analysis techniques.

* I want to live indefinitely, not just live, die, live again, and die permanently.
  I would prefer to be revived once the technological and economic means are in place to deal with a second death, such as being ready to preserve me again if necessary.

* If you haven't figured out how brain cells produce minds, then I would prefer to be revived as a living biological being rather than to have my mind uploaded into software form.

* The possibilities if a hostile agent seizes control of the software and hardware that a human mind is running on are horrifying; one of the simplest is editing minds without their consent. Given what I understand about software development, one of the few effective ways to reduce software's vulnerability to such attacks is for it to be made free and open-source, in a manner that allows anyone to inspect it for bugs. I've weighed the options, and if the choice is between uploading my mind to run on non-free, closed-source software and leaving my body cryopreserved, then I would reluctantly choose to remain frozen, even if doing so increases the risk that I'll never be revived at all. However, if the uploaded mind derived from my brain has access to the closed-source software and its documentation, and enough time to inspect it, then I would choose uploading.

* If it turns out to be impossible to revive me in any fashion that could be considered a future version of myself, but it is possible to create a new person who shares some portion of my mind, skills, and/or memories, then I'd accept the creation of that person as preferable to not creating them, to increase the odds that someone who shares my values will continue to exist and work towards them.

* I suspect that there will be certain advantages to being one of the first minds turned into software, and certain disadvantages; and I'm willing to try to gain those advantages, if the risks of those disadvantages can be minimized.
With that goal in mind, the preconditions for my pre-approval of experimental uploading include:

- All the typical institutional ethical standards for human experimentation are met (excluding any that would prevent human mind-uploading at all).

- I value my privacy, but I value my long-term survival more. To reduce the odds that one uploading group's failure leaves my mind unrecoverable, at least three separate copies of all the data describing my brain should be stored in physically separate locations (with checksums, redundancy, and the other standard techniques for dealing with bit-rot and other forms of data degradation), and should be made available to other groups using different methods to try to reconstruct my mind.

- Any software used to reconstruct or run the emulated mind is free and open-source, so that at least the uploaded mind (and preferably people in general) can check for and repair any bugs, malware, and other issues.

- To make sure that the uploading technology meets minimal standards for successfully creating a human mind, it should first have worked successfully on other mammalian brains, including those of rats and chimpanzees.

- The personal data which I have requested in my will to be stored by my cryonic service provider in a "perpetual storage drawer" should also be stored with, and made available to, the uploaded mind.

- A neutral third party, outside the group doing the uploading, should act as an arbitrator to determine whether these requests are being met, and whether the uploaded mind receives the benefit of any other moral or legal standards that exist at the time, such as what today are called fundamental human rights.
In case it may be of use in verifying whether my revived self still has my memories, here is the SHA-512 checksum of a passcode I am very likely to remember, and very unlikely to say (or even describe) before I die:

aadc957d0e770b792d40b5532e5dab5f7701faab211b80e9b0179119c01e89d0a6fc5cc3e6dafacd52d334b05b2bf300a75ec683796a464f010de89fbab106d1

The following points are extremely minor compared to the above, and if any of them become relevant at all, then I expect that I will be more than happy even if they are ignored.

- I have relatively little emotional attachment to my current form, and would prefer to be healthy and fit rather than shaped exactly as I was at my death.

- I greatly enjoy reading, writing, and hiking. As long as I am still able to do those, I suspect that I would be as happy experiencing the shape of a female centauroid rodent in virtual reality, or of an asexual robot on Mars' moon Phobos, as I would that of a male human on Earth.

- My favorite colors tend to be the blues, grays, and whites of the sky.

- One of my favorite music albums is Jon Anderson's "Olias of Sunhillow", and I wouldn't object to it being played as I wake up after being dead.

(As this is a preliminary draft for discussion purposes, I won't print or sign it, and am likely to continue to edit various details. Once I have worked out the more obvious flaws, I will use GNU Privacy Guard to make a digitally signed copy, in a fashion that can be verified with a publicly-hosted public key, and make a physical copy to be stored with my other papers by my cryonic service provider.)

----->8-----