What does Elon Musk want? What is his vision for the future? These questions are hugely important, because the decisions Musk makes, unilaterally and undemocratically, within the relatively small bubble of unelected tech billionaires, will very likely shape the world that you and I, and our children and grandchildren, end up living in. Musk is currently the richest man on the planet and, if only because of that fact, one of the most powerful people in all of history. What Musk wants the future to look like is, quite possibly, what the future of all humanity will end up being.

That’s why it’s important to unravel the underlying normative worldview that has shaped his actions and public statements, from founding SpaceX and Neuralink, to warning that we’re in the midst of a “demographic catastrophe” caused by underpopulation, to trying (but, alas, failing) to buy Twitter, the world’s most influential social media platform.

Musk has given us some hints about what he wants. For example, he says he hopes to “preserve the light of consciousness by becoming a spacefaring civilization and extending life to other planets,” although there are good reasons to believe that Mars colonies could lead to catastrophic interplanetary wars that might well destroy humanity, as the political theorist Daniel Deudney has convincingly argued in his book Dark Skies.

In a recent TED interview, Musk further stated that his “motivating worldview or philosophy” is:

to figure out what questions to ask about the answer that is the universe, and as we expand the scope and scale of consciousness, biological and digital, we would be better able to ask those questions, frame those questions, and to understand why we are here, how we got here, what the hell is going on. And so that is my driving philosophy: to expand the scope and scale of consciousness to better understand the nature of the universe.

But more to the point, Musk’s futuristic vision also seems to have been heavily influenced by an ideology called “longtermism,” as I argued last April in an article for Salon. While longtermism can take many forms, the version Musk seems most enamored with comes from the Swedish philosopher Nick Bostrom, who runs the grandly named Future of Humanity Institute, which boasts on its website of a “multidisciplinary research team [that] includes several of the world’s most brilliant and famous minds working in this field.”

Consider Musk’s recent tweets about underpopulation. Not only is he worried that there aren’t enough people to colonize Mars (“If there aren’t enough people for Earth,” he writes, “then there certainly won’t be enough for Mars”); he’s apparently also worried that rich people aren’t reproducing enough. As he tweeted on May 24: “Contrary to popular belief, the richer someone is, the fewer children they have.” Musk himself has eight children, and he proudly declared: “I’m doing my part haha.”

Although the fear that “less desirable” people might outbreed “more desirable” people (phrases that Musk himself has not used) can be traced back to the late 19th century, when Charles Darwin’s cousin Francis Galton published the first book on eugenics, the idea has recently been taken up by people like Bostrom.
For example, in Bostrom’s 2002 paper “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” one of the seminal papers of longtermism, he identified “dysgenic pressures” as one of the many “existential risks” facing humanity, alongside nuclear war, dramatic climate change and the possibility that our universe is a massive computer simulation that gets shut down (a possibility Musk seems to take very seriously). As Bostrom wrote:

It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (“lover of many offspring”).

In other words: Yes, we should be concerned about nuclear war and dramatic climate change, but we should be just as concerned about, to put it bluntly, less intelligent people outbreeding smarter people. Fortunately, Bostrom continued, “genetic engineering is rapidly approaching the point where it will become possible to give parents the choice of endowing their offspring with genes that correlate with intellectual capacity, physical health, longevity, and other desirable traits.” So even if less intelligent people continue to have more children than intelligent ones, advanced genetic engineering technologies could correct the problem by allowing future generations to create superintelligent designer babies that are superior even to the greatest geniuses among us.

This neo-eugenic idea is known as “transhumanism,” and Bostrom is probably the most prominent transhumanist of the 21st century so far. Since Musk hopes to “start the next stage of human evolution” by, for example, putting electrodes in our brains, it’s fair to conclude that Musk is a transhumanist as well. (See Neuralink!)

More recently, on May 24 of this year, Musk retweeted another of Bostrom’s papers, one that is also foundational to longtermism, perhaps even more so. Titled “Astronomical Waste,” it was described in the original tweet as “Probably the most important paper ever written,” which is just about the highest praise possible.

Given Musk’s unique and profound influence over the shape of things to come, it behooves all of us (the public, government officials and journalists alike) to figure out exactly what the Bostromian longtermist’s grand cosmic vision, as we might call it, really is. My goal in the rest of this article is to explain this worldview in all its weird, technocratic detail, since I have written about this topic many times before and once considered myself a convert to the quasi-religious worldview it embodies.

The main thesis of “Astronomical Waste” draws its strength from a moral theory that philosophers call “total utilitarianism,” which I will abbreviate as “utilitarianism” below. Utilitarianism states that our only moral obligation, the goal toward which we must strive whenever a moral choice presents itself, is to maximize the total amount of value in the universe, where “value” is often defined as something like “pleasurable experiences.”
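To make the structure of that claim concrete (the notation here is my own illustration, not Bostrom’s), total utilitarianism scores a possible future containing $n$ people with lifetime welfare levels $v_1, \dots, v_n$ by the sum

$$V = \sum_{i=1}^{n} v_i$$

which can be raised in two ways: by increasing the individual terms $v_i$ (making lives better) or by increasing $n$ itself (adding more lives whose welfare is net positive). That distinction is exactly what the next paragraphs unpack.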
So every time you enjoy a good TV show, have a fun night out with friends, devour a good meal or have sex, you are introducing value into the universe. When all is said and done, once the universe has finally sunk into a frozen lake of maximum entropy in accordance with the second law of thermodynamics, the more value there was, the better off our universe will have been. As moral beings (creatures capable of moral action, unlike chimpanzees, worms and rocks), we are obligated to ensure that as much of this “value” as possible exists in the universe.

This leads to a question: How exactly can we maximize value? As mentioned above, one way is to increase the total amount of pleasurable experience each of us has. But utilitarianism points to another possibility: We could also increase the total number of people in the universe who have lives that, on balance, create net-positive amounts of value. In other words, the greater the absolute number of people experiencing pleasure, the better off our universe will be, morally speaking. We should therefore create as many of these “happy people” as we can. At the moment, these people do not exist; our ultimate moral duty is to bring them into being.

Underlying this idea is a very strange account of what people, you or me, actually are. For typical utilitarians, people are nothing more than “containers” or “receptacles” of value. We matter only as means to an end, as objects that allow “value” to exist in the universe. People are containers of value, and that’s all, as Bostrom himself suggests in several of the papers he has written. For example, in “Astronomical Waste” he describes people as mere “structures of value,” where “structures” can be understood as “containers.”

In another article, titled “Letter From Utopia,” Bostrom writes that by modifying our bodies and brains with technology, we can create a techno-utopian world full of endless pleasures, populated by superintelligent versions of ourselves that live forever in a paradise of our own construction (no supernatural religion required!). Writing in the voice of a superintelligent, immortal “posthuman” addressing present-day human beings, Bostrom proclaims: “If I could share one second of my conscious life with you! Its joy is so great” (emphasis mine).

If you want to object at this point that you are not just a “container” for value, you wouldn’t be alone. Many philosophers find this account of what humans are alienating, impoverished and untenable. Humans, as I would argue along with many others, must be seen as ends in themselves that are valuable as such…