On both sides of the aisle, people ask me, “How do you reconcile your beliefs on transhumanism with your reactionary views?” This is a great question, so I’ll explain here. Caution: this explanation is long and complicated, which is why I’ve put it off until now.
The first point is that Transhumanism is inevitable: widespread human enhancement is going to occur unless a global totalitarian Luddite regime or total thermonuclear war stops it. This means that any philosophy that plans to have an impact on the future must implicitly acknowledge Transhumanism, or become irrelevant. People can put on a frowny face about it if they like, but in the end they’ll be steamrolled by it.
I could spend 300 pages arguing this point and still not satisfy skeptics, but I want to say at least something. Over the long run, cultures with superior military technology, reproduction, and economic growth tend to replace other cultures. (Neanderthals, anyone?) Transhumanist technologies such as soldier enhancement technologies, artificial wombs, and molecular manufacturing have the potential to supercharge all these metrics by orders of magnitude. Very large power gaps could potentially be produced on historically minute timescales.
Molecular manufacturing means 3D printers that construct things atom by atom, using quintillions of tiny molecular assemblers. There are three main reasons this is a big deal. The first is that atomic precision would allow the mass production of ultra-strong, ultra-light materials such as fullerenes. Think mansions made out of pure diamond. The second is that it would allow the production of motors and batteries with extremely high energy densities. For this, imagine a tank with motors as powerful as an aircraft carrier’s. The third is scaling laws: with many tiny assemblers, a large percentage of the total mass of the nanofactory is devoted to manufacturing the product. Preliminary estimates suggest a nanofactory would be able to output its own weight in product in as little as 15 hours. Imagine buildings that grow faster than bamboo, something like ten feet per day.
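To make the scaling-law claim concrete, here is a minimal sketch of the replication arithmetic. The 15-hour figure is the preliminary estimate cited above; the 1 kg starting mass and the assumption that all output is reinvested in new nanofactories are my own illustrative choices:

```python
# Sketch of nanofactory self-replication arithmetic.
# Assumptions (illustrative): a 1 kg nanofactory outputs its own mass in
# product every 15 hours, and all output becomes new nanofactories.

DOUBLING_TIME_H = 15.0  # hours per self-replication (preliminary estimate cited above)

def fleet_mass_kg(initial_kg: float, hours: float) -> float:
    """Total nanofactory mass after `hours` of exponential self-replication."""
    return initial_kg * 2 ** (hours / DOUBLING_TIME_H)

week = fleet_mass_kg(1.0, 7 * 24)
print(f"After one week: {week:,.0f} kg")  # exponential growth: about 2,350 kg from 1 kg
```

The point of the sketch is only that a 15-hour doubling time compounds into multi-order-of-magnitude growth within days, which is what makes the power gaps described above plausible on short timescales.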
If a nation with nanofactory capabilities engaged in a military conflict with a nation without them, there would be no contest. Any nation that does not adopt this technology will not have the tools to be a player on the global stage.
Molecular manufacturing is intimately connected to Transhumanism because only this technology could produce artifacts with performance high enough that people would actually want to enhance themselves with them. More primitive technologies will not produce prosthetics advanced enough to justify replacing our flesh with cybernetic parts. With molecular manufacturing, however, many amazing forms of enhancement would become possible, and desirable. Running at 60 mph, breathing underwater, stopping bullets with our teeth, scaling walls, living on grass — the works.
It’s highly uncertain when molecular manufacturing will be developed, but there are very strong arguments for its general feasibility. The most obvious is that life itself does molecular manufacturing all the time, in the form of protein synthesis. You yourself are made up of complex molecular machinery. Estimates of when it will be developed are all over the map, from Ray Kurzweil’s wildly optimistic 2020s to Ramez Naam’s conservative post-2100. As for myself, I’ll pick a probability distribution arbitrarily centered around 2060, with a standard deviation of 15 years.
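Since the distribution is stated explicitly, the implied odds are easy to read off. A minimal sketch using the normal CDF, where the Normal(2060, 15) parameters are the ones given above and everything else is standard arithmetic:

```python
import math

MEAN, SD = 2060.0, 15.0  # the arbitrary distribution described above

def prob_before(year: float) -> float:
    """P(molecular manufacturing arrives before `year`) under a Normal(2060, 15) assumption."""
    z = (year - MEAN) / (SD * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))  # standard normal CDF via the error function

for y in (2040, 2060, 2100):
    print(f"P(before {y}) = {prob_before(y):.2f}")
# roughly 9% before 2040, 50% before 2060, over 99% before 2100
```

Treating a one-line guess this way is obviously a toy exercise, but it makes the shape of the bet explicit: under these parameters, a Kurzweil-style 2020s arrival sits several standard deviations out, while post-2100 is likewise a tail event.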
Molecular Manufacturing Will Be Necessary for Nations to Compete, and Necessarily Leads to Transhumanism
I mentioned that any nation that does not embrace this technology will be doomed to irrelevance. Any nation that does embrace it gets on the fast track to widespread human enhancement. It is possible to imagine a country that restricts enhancement to soldiers alone, but I doubt this restriction would hold unless that country were able to conquer the planet and put down economic threats. Otherwise, some other country would develop the technology and pump out enhanced civilians hundreds of times more economically productive than unenhanced ones, quickly gaining a huge advantage.
To give a taste of what I mean, imagine people with brain-computer interfaces that allow them to mentally control thousands of machines at once, people who never need to sleep, people who have enough mental energy to perform difficult tasks 24/7/365, and so on. All of this would become possible with the advanced products of molecular manufacturing.
Those fearful of the new technology can make compelling ethical arguments to restrict it if they like, but if a nation chooses not to adopt these technologies, while others do, they will be defeated economically and militarily. A worldwide consensus banning human enhancement seems possible in the short run, but in the longer run, it would be like a worldwide ban on electricity — not really enforceable. We are seeing the first stirrings of this dynamic by witnessing 3D printers that print guns and the feeble attempts of the State Department to restrict them.
My point is that Transhumanism is not a choice. It’s inevitable. Molecular assemblers will be built, and human enhancement will flow directly from them. For any philosophy to survive in the long run, Reaction included, it must take into account these realities.
Neoreaction and Molecular Manufacturing
Long and hard thought about the consequences of Transhumanism, combined with gentle reactionary nudging by Mencius Moldbug, is what finally caused me to wholeheartedly embrace Neoreaction. Neoreaction is essentially an endorsement of Traditional principles and a rejection of Progressive principles. Considering the likely long-run consequences of unrestrained, worldwide molecular manufacturing, I was horrified by how many ways this story could go wrong. Untraceable killer cybernetic mosquitoes for anonymous assassinations. Mobsters with fullerene muscles a hundred times stronger than steel. Nuclear enrichment centrifuges you can build in your basement. Combined with a largely unrestrained, laissez-faire anarcho-capitalist or simply neoliberal capitalist system, we have a recipe for disaster. Only through embracing Traditional structures and patterns did I see a way out of this conundrum.
The reason that more Transhumanists are not Reactionaries, in my view, is that they haven’t done their reading on molecular manufacturing, or they mistakenly think that Friendly AI or a Kurzweilian Singularity will come around in time to save the day. The writings describing the full picture of molecular manufacturing are rather long and technical, and most people — even Harvard graduates with beefy IQs — simply don’t have the time or inclination to read them. The standards of Transhumanism have fallen in the last decade as well. In the late 90s and the early 00s, when the primary transhumanist venues were the Extropians and SL4 mailing lists, the technical understanding of the average transhumanist was excellent. Today, it is quite poor. There’s an emergent brilliance produced when you put Spike Jones, Robin Hanson, Anders Sandberg, Chris Phoenix, Eliezer Yudkowsky, and Robert Bradbury on the same mailing list, which simply has no modern-day analogue. This environment was the forge that crafted the most capable Transhumanist leaders of today. Second-generation students of transhumanism are simply not the same.
When people understand the true extent of the feasibility and power of molecular manufacturing, a grim attitude tends to set in due to all the palpable risks. I’m pleased that the 3D-printed gun exists, because this is the first visceral, public example of the phenomenon I’ve been writing about and fearing since 2001. Unrestrained technological power in the hands of the masses. It’s nearly impossible to grasp the full picture until you understand the likely production capabilities and relative technological feasibility of molecular manufacturing. Many of the original visionaries are beginning to get quite old, and are falling silent without passing on their knowledge in detail to many students, so I fear that the baton is not being handed off properly, and will be dropped along the way. Those who have the responsibility to pass off this knowledge know who they are.
Hierarchy as a Buffer Against Hyper-Empowered Masses
My concern is with individuals and small groups who asymmetrically empower themselves through emerging technologies and don’t have the public good in mind. Given the current predominant political sympathies, which are ultra-egalitarian, there would be few restrictions on the routes to this power. Adopting Traditional principles, however, which are strictly hierarchical, would restrict this power to the hands of a few, providing fewer points of failure. Would you rather take the risk that a thousand elite leaders exploit powerful manufacturing technologies to do damage, or the guarantee that if the technologies are available to a billion people, many of them will certainly do damage?
The benefit of conferring responsibility on a comparatively small set of elite individuals is that these individuals can be educated for their responsibilities far in advance, groomed and cultivated for their important roles. They can be instilled with good morals, broad understanding, supportive familial and organizational structures, and mutual expectations worthy of their station. Common people tend to think only of themselves, and have trouble feeling personal responsibility for affairs of state. Handing someone a nanofactory automatically gives them the power to influence affairs on a worldwide scale. Is this a power we really want handed to those educated by reality television?
Students of political correctness will cringe at the thought of conferring superlative powers on an elite, but the long-term survival of the human species is more important than historically contingent factors that are based on nothing more than the preoccupations of an unexpectedly influential cadre of Berkeley students in the 1960s. Prior to the 1960s, high-level political thought was still based heavily on Traditional principles of sacred responsibility among a few men of power. The notion that true power and control should be shattered into 300 million little pieces and distributed evenly among the populace is a very recent idea, one we would do well without. If UC Berkeley had never existed, progressivism might never have manifested in its current form and risen to become the dominant ideology of the nation’s elite.
The key concept is that molecular manufacturing and transhumanism are guaranteed to highly empower someone. Some set of people will be highly empowered; preventing this isn’t an option. Fewer people, with a deeper sense of responsibility, coupled with moral and spiritual values, is a superior option to the alternative.
Speaking for myself personally, my key motivation is not having to witness or experience global nanowar. For a grasp of the capabilities that could be invoked during such a war, I recommend the obscure volume Military Nanotechnology: Potential Applications and Preventive Arms Control. I consider this slim treatise to be among the five most important books ever published, but it’s completely unknown outside a minuscule circle of academics. There are promising experts in emerging technologies, such as my colleague Patrick Lin at Cal Poly, or Brian Wang of the leading futurist blog Next Big Future, who I believe are aware of it, so it isn’t totally unknown.
An interesting theme of the book is how many aggressive arms control measures it proposes. A key proposal is a restriction on combat robots smaller than 0.2–0.5 meters (approximately 8–20 inches) in size. Molecular manufacturing would enable combat robots the size of bacteria, and this author proposes a lower size limit of 20 inches? Effectively enforcing such a treaty would require considerably more surveillance and top-down structure in society than exists today. Sacrificing some degree of privacy to ban these robots would be well worth it; the combat capabilities of swarms of small robots would be so immense that they would nearly guarantee severe geopolitical instability.
It’s laborious for me to explain why small robots would be a major risk, because it should be self-evident. Very small robots could be made exceedingly stealthy, they could provide comprehensive surveillance of enemy activities, and could inject lethal payloads of just a few microliters. Moreover, they could self-detonate after carrying out their mission, making them untraceable. Imagine the leadership of North Korea having possession of fly-sized robots providing surveillance of the military headquarters of every nation on the planet. They could sell this information to the highest bidder, completely destroying military information security. The detailed blueprints for the most advanced nuclear weapons could be made common knowledge. Clearly something we want to avoid. Most proposed countermeasures, such as hermetically sealing off every important facility, are not practical. Only through restricting the “means of production” in the hands of a responsible few can we avoid the worst scenarios.
The appeal of essentially reinstituting an aristocracy to cope with the challenges of emerging technologies is that it would confer personal responsibility onto individuals for state actions. Not the sad facsimile of personal responsibility we see among elected officials today, who transfer or retire after a four- or eight-year term, but the genuine responsibility that comes with having your name attached to something for the long haul. When someone messes up in a democratic government, faceless bureaucrats all point their fingers at one another and an “investigation” is formed, the purpose of which is to find nothing. When someone messes up in a monarchical government, responsibility ultimately rests with either an official with a long tenure, who may be dismissed, or the monarch him- or herself.
When someone’s personal reputation, their personal life, is threatened by the misconduct of their subordinates, and the whole system is designed for long-term stability, they tend to think twice before bending the rules. Even more effective is a system like that of monarchical Europe, where the elites were related by blood and more closely related to one another than to their subjects. This builds a sense of mutual respect, understanding, and camaraderie that today’s politicians can only blink in confusion about. Elites managing a government for the long term are incentivized to care about the far future, not just the next election cycle. Unencumbered by the frivolous winds of public opinion, they are free to consult advisors for the most intelligent decision, not necessarily the most popular one.
Another stability-inducing effect produced by putting responsibility in the hands of an elite is that wars are less likely to be fought for highly abstract, nationalistic reasons such as “promoting democracy in the world.” Rather, leaders have an incentive to fight wars over tangible assets, which tend to be limited, or not fight them at all. Total war tends not to occur among monarchies unless the conflict is based in religion, because elites are less susceptible to getting caught up in a blind nationalistic fervor that upholds the slogan, “fight to the death.” Nationalistic wars are a unique product of groupthink among masses of people. Even the notion of having a large, permanent standing army is a relatively recent idea.
I hope I’ve made a respectable attempt at conveying some of the forces that attracted me to reactionary thought in the context of highly advanced emerging technologies. I’ve only scratched the surface in this essay, but I think I’ve pointed in the direction of what I mean. If you’re interested in providing a response (on your own blog, of course, as comments are closed), I encourage you to discuss your ideas with me personally before responding, rather than jumping to conclusions about what I believe and responding based on emotion. Thank you for your time.