(This is the text of a talk I delivered at the Next Frontiers Applied Fiction Day in Stuttgart on Friday November 10th, 2023. Note: early draft, contains some typos, I'll fix them next week when I get home.)
In 2021, writer and game designer Alex Blechman inadvertently created a meme:
Sci-Fi Author: "In my book I invented the Torment Nexus as a cautionary tale."
Tech Company: "At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus!"
Hi. I'm Charlie Stross, and I tell lies for money. That is, I'm a science fiction writer: I have about thirty novels in print, translated into a dozen languages, I've won a few awards, and I've been around long enough that my Wikipedia page is a mess of mangled edits.
And rather than giving the usual cheerleader talk making predictions about technology and society, I'd like to explain why I—and other SF authors—are terrible guides to the future. Which wouldn't matter, except a whole bunch of billionaires are in the headlines right now because they pay too much attention to people like me. Because we invented the Torment Nexus as a cautionary tale and they took it at face value and decided to implement it for real.
Obviously, I'm talking about Elon Musk. (He named SpaceX's drone ships after Iain M. Banks' spaceships, thereby proving that irony is dead.) But he's not the only one. There's Peter Thiel (who funds research into artificial intelligence, life extension, and seasteading, when he's not getting blood transfusions from 18-year-olds in the hope of living forever). Marc Andreessen of venture capital firm Andreessen Horowitz recently published a self-proclaimed "techno-optimist manifesto" promoting the bizarre accelerationist philosophy of Nick Land, among other weirdos, and hyping the current grifter's fantasy of large language models as "artificial intelligence". Jeff Bezos, founder of Amazon, is another space colonization enthusiast like Elon Musk, but while Musk wants to homestead Mars, Bezos is a fan of Gerard K. O'Neill's 1970s plan to build giant orbital habitat cylinders at the Earth-Moon L5 libration point. And no tour of the idiocracy is complete without mentioning Mark Zuckerberg, billionaire CEO of Facebook, who blew through ten billion dollars trying to create the Metaverse from Neal Stephenson's novel Snow Crash, only for it to turn out that his ambitious commercial virtual reality environment had no legs.
(That was a deliberate pun.)
It'd be amusing if these guys didn't have a combined net worth somewhere in the region of half a trillion euros and the desire to change the human universe, along with a load of unexamined prejudices and a bunch of half-baked politics they absorbed from the predominantly American SF stories they read in their teens. I grew up reading the same stuff but as I also write the modern version of the same stuff for a living I've spent a lot of time lifting up the rocks in the garden of SF to look at what's squirming underneath.
Science fiction influences everything this century, both our media and our physical environment. Media first: about 30% of the big budget movies coming out of the US film industry these days are science fiction or fantasy blockbusters, a massive shift since the 1970s. Computer games are wall-to-wall fantasy and SF—probably a majority of the field, outside of sports and simulation games. (Written fiction is another matter, and SF/F combined amount to something in the range 5-10% of books sold. But reading novels is a minority recreation this century, having to compete with the other media I just named. The golden age of written fiction was roughly 1850 to 1950, give or take a few decades: I make my living in an ageing field, kind of like being a classical music composer or an 8-bit games programmer today.)
Meanwhile the influence of science fiction on our environment seems to have been gathering pace throughout my entire life. The future is a marketing tool. Back in the early 20th century it was anything associated with speed—recall the fad for streamlining everything from railway locomotives to toasters, or putting fins on cars. Since about 1970 it became more tightly associated with communication and computers.
For an example of the latter trend: a decade or two ago there was a fad for cellular phones designed to resemble the original Star Trek communicator. The communicator was movie visual shorthand for "a military two-way radio, but make it impossibly small". But it turns out that enough people wanted an impossibly small clamshell telephone that once semiconductor and battery technology got good enough to make one, they made the Motorola Razr a runaway bestseller.
"Artificial intelligence" and "computer controlled" became marketing buzzwords decades ago. They're used to mis-sell cars described as "self-driving" and technologies like Tesla's so-called "autopilot". In reality, aircraft autopilots don't do what most people think they do (they require constant monitoring by pilots). And self-driving car software is dangerously insufficient to do the job, as witness the recent revelation that self-driving taxi firm Cruise—recently banned from San Francisco after a pedestrian was dragged under one of their cars—requires constant human supervision. But as long as it sells cars to customers who think it means they can relax and watch a movie while they commute, why should Elon Musk care? Science fictional TV shows like "Knight Rider" primed those of us who grew up in the 1970s and 1980s to expect intelligent self-driving cars in the near future, and there has been a gold rush to sell self-driving cars, even though the technology isn't ready yet and has lethal failure modes. Because anything that tastes of the future is marketing gold.
It's becoming increasingly unusual to read a report of a new technology or scientific discovery that doesn't breathlessly use the phrase "it seems like science fiction". The news cycle is currently dominated by hype about artificial intelligence (a gross mis-characterisation of machine learning algorithms and large language models). A couple of years ago it was breathless hype about cryptocurrency and blockchain technologies—which turned out to be a financial services bubble that drained a lot of small investors' savings accounts into the pockets of people like convicted fraudster Sam Bankman-Fried.
It's also driving politics and law. Recently in the UK, Elon Musk paid a visit to Prime Minister Rishi Sunak. Last week we were given a preview of the government's legislative program for the coming year, and guess what it contained? Yes: new laws to permit self-driving vehicles on the roads, and regulation of artificial intelligence. And while some degree of government monitoring and regulation of these sectors is welcome, the UK has much bigger problems right now—and I'd rather the laws weren't drafted by an Elon Musk fanboy.
Now that I've shouted at passing clouds for a bit—or rather, at dangerous marketing fads based on the popular entertainment of decades past—I'd like to talk about something that I personally find much more worrying: a political ideology common among Silicon Valley billionaires of a certain age—known by the acronym TESCREAL—that is built on top of a shaky set of assumptions about the future of humanity. It comes straight out of an uncritical reading of the bad science fiction of decades past, and it's really dangerous.
TESCREAL stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism (in a very specific context), Effective Altruism, and longtermism." It was identified by Timnit Gebru, former technical co-lead of the Ethical Artificial Intelligence Team at Google and founder of the Distributed Artificial Intelligence Research Institute (DAIR), and Émile Torres, a philosopher specialising in existential threats to humanity. These are separate but overlapping beliefs that are particularly common in the social and academic circles associated with big tech in California. Prominent advocates on the transhumanist and AI side include Ray Kurzweil, a notable technology evangelist and AI researcher at Google, philosophers Nick Bostrom and Eliezer Yudkowsky, and, going back rather further, Russian rocket scientist Konstantin Tsiolkovsky, whose writings brought Russian Cosmism to America. Sam Bankman-Fried is an outspoken advocate of Effective Altruism, another element of this overlapping web of beliefs. Elon Musk and Jeff Bezos, as noted, both seem to be heavily influenced by Tsiolkovsky's advocacy of space colonization. Musk's Neuralink venture, attempting to pioneer human brain-computer interfaces, seems intent on making mind uploading workable, which in turn points to the influences of Kurzweil and other singularitarians. And hiding behind these 20th and early 21st century thinkers are older influences—notably the theological speculation of 19th century Russian Orthodox priest Nikolai Fedorovich Fedorov.
How did this ideology come about, and why do I think it's dangerous?
(Longtermism is the belief that we should discount short-term harms to real existing human beings—such as human-induced climate change—if it brings us closer to the goal of colonizing the universe, because the needs of trillions of future people who don't actually exist yet obviously outweigh the needs of today's global poor. If you accept that it's our destiny as a species to take over the cosmos, then it follows that longtermist entrepreneurs are perfectly justified in moving fast, breaking things, and ruthlessly maximizing profit extraction, as long as they spend their wealth on colonizing Mars. Which is just the first step on the road to conquering the galaxy and a bunch of other stuff like mind uploading, becoming immortal, creating artificial intelligences to do all the tedious work, resurrecting the dead, and taking over the universe. It posits a destiny for humanity, which of necessity makes it a secular religion. It means that if you don't believe in their plans, then you're some kind of anti-science backsliding reactionary heretic. And if this sounds just slightly insane to you, well, that's probably because you're not Elon Musk or Peter Thiel.)
Speaking as a science fiction writer, I'd like to offer a heartfelt apology for my part in the silicon valley oligarchy's rise to power. And I'd like to examine the toxic role of science fiction in providing justifications for the craziness.
So, here's the thing: science fiction is fiction. And while we can dress it up in fancy clothes and declare that fiction is an artistic form for exploring the human condition, we're tip-toeing past the slaughterhouse with attached sausage factory—the industry that takes the raw material and puts it in front of us. As an editor once told me, "you can write anything you want, but we don't have to publish it." And without publishers, or some mechanism for replicating and advertising the existence of your text, you won't have any readers.
Publishers, incidentally, are not monolithic. They're hives of human activity where people working in different departments each do their bit to try and turn the product they're taking in at one end—raw book manuscripts are about as appetizing as a raw animal carcass, they take a lot of work to make them appealing—into saleable books or tasty-looking sausages. I'm not going to get into the minutiae of trade publishing or we'd be here for the rest of the year, but as an author, my job is to convince an editor to buy my book. The editor's job is then to convince the marketing department that this book is commercially viable. And the marketing department try to push it in the very specific media channels that bookshop staff read to decide what products to order in next month. So there's a long chain of whispers between the author and the reader, and because a book that doesn't sell will cost each intermediary money, and there are hundreds of books per month to choose between, it's easier for them to say "no" than to say "yes".
I'm focussing here on a very specific channel, namely novels that are written and sold via traditional big publishing companies. Different constraints apply to different formats and different sales channels—say, short fiction or web serials, sold via anthologies or self-published direct to Kindle or other ebook storefronts. But there's almost always a middle-man, even if you're self-publishing (the middle-man in this case is Patreon or Ko-Fi or Amazon or an ad exchange somewhere: it's whoever processes payments for you). The only way to completely avoid middle-men is to give your work away for free.
The same is true of other media, such as film, TV, music, and games. If you refuse to compromise with your audience's expectations they will put the book down, flip channel, or leave a one-star review on Steam.
So I exist in a symbiotic relationship with my readers. They keep buying my books as long as they remain enjoyable. And my publishers keep publishing my books as long as the readers keep buying them. So like other SF writers I've got a financial incentive to write books that readers find enjoyable, and that usually means conforming to their pre-existing biases. Which are rooted in the ideas they absorbed previously. Science fiction as a genre has inertia, and it's hard to get new ideas to stick if they force the readers out of their comfort zone.
The science fiction genre that today's billionaires grew up with—the genre of the 1970s—has a history going back to an American inventor and publisher called Hugo Gernsback. Gernsback founded the first magazine about electronics and radio in the United States, Modern Electrics, in 1908, but today he's best remembered as the founder of the pulp science fiction magazine Amazing Stories in 1926.
The early 1908 issues of Modern Electrics would be instantly recognizable to a teenage personal computing enthusiast of the 1970s and early 1980s—the same generation as the tech billionaires this talk is really about. The first two decades of the 20th century saw a huge explosion of interest in the field of wireless—radio broadcasting as we know it today, but also amateur radio. Radio sets back then were hand-built and repaired by local enthusiasts, much like many early personal computers. Gernsback founded Modern Electrics to carry adverts for radio components and to promote the amateur radio hobby. He curated a directory of amateur radio users and their call signs and equipment, published articles about building and operating your own wireless set, and editorialized about the future of radio. Amateur radio grew explosively in the nineteen-teens, and just like computer hobbyists half a century later, many of the radio hobbyists ended up working in the industry.
Gernsback began to publish general articles about science and technology, then fiction with a focus on the science—including some of his own stories—culminating in starting the magazine Amazing Stories as a vehicle for fantastic tales about a technological future. And as a runaway commercial success, Amazing Stories spawned imitators and, eventually, an industry.
(We can skip over the details of how SF publishing developed from the earnest technophiliac visions of Gernsback to the two-fisted planetary romances of the pulp magazines in the 1920s, survived the collapse of the pulp magazine distribution network in the 1950s and migrated to paperback novels sold in wire racks in supermarkets, then colonized the heights of the publishing industry bestseller lists from the 1960s onwards.)
American SF was bootstrapped by a publisher feeding an engineering subculture with adverts for tools and components. There was an implicit ideology attached to this strain of science fiction right from the outset: the American Dream of capitalist success, mashed up with progress through modern technology, and a side-order of frontier colonialism. It's not a coincidence that the boom in planetary romances occurred shortly after the American frontier was finally closed: the high frontier had a natural appeal and gradually replaced the western frontier in the popular imagination.
(As futurist and SF author Karl Schroeder remarks, every technology has political implications. If you have automobiles you will inevitably find out that you need speed limits, drunk driving laws, vehicle and driver licensing to ensure the cars and their drivers are safe ... and then jaywalking laws, the systematic segregation of pedestrians and non-automotive traffic from formerly public spaces, air pollution, and an ongoing level of deaths and injuries comparable to a small war. You also get diversion of infrastructure spending from railways to road building, and effective limits on civil participation by non-drivers.
The new radio enthusiast magazine readers Gernsback was cultivating didn't ask about the politics of radio, although it would come back to bite them in the 1930s with increased regulation, then state censorship and the use of wireless broadcasts for wartime propaganda. They were just having fun and maybe trying to build a local radio repair shop. But there's been a tendency in American SF, ever since those early days, to be wilfully blind to the political implications of the shiny toys.)
There is a darker element to this era of science fiction. Gernsback's publishing empire arose around the time the Italian poet Filippo Tommaso Marinetti published his Manifesto of Futurism (in 1909). Futurism was an explicitly ideological program—an artistic movement that rejected the past and celebrated speed, machinery, violence, youth and industry, and argued for the modernization and cultural rejuvenation of the Italian state. In 1918 Marinetti founded a Futurist Party, but a year later it merged with Benito Mussolini's movement, and Marinetti is credited as the co-author of the Fascist Manifesto of 1919.
Hugo Gernsback didn't consciously bring fascism into American SF, but the field was open to it by the 1930s. Possibly the most prominent contributor to far right thought in American science fiction was the editor John W. Campbell. Campbell edited Astounding Science Fiction, one of Amazing Stories' rivals, from 1937 until 1971. (Astounding is still with us today, having changed its name to Analog in 1960.) Campbell discovered or promoted many now-famous authors, including Robert Heinlein, Isaac Asimov, E. E. Smith, and Jack Williamson. But Campbell was also an anti-communist red-baiter. He was overtly racist, an anti-feminist, and left his imprint on the genre as much by what he didn't publish as by what he did—and how he edited it. For example, Tom Godwin's classic short story The Cold Equations was sent back with editorial change requests three times before Godwin finally gave Campbell the ending he wanted: one that, as Cory Doctorow put it, turned the story "into a parable about the foolishness of women and the role of men in guiding them to accept the cold, hard facts of life".
Later in his career, Campbell fell victim to just about every pseudoscientific grift that was going. (If he was alive today he'd probably be selling NFTs.) He had a weakness for perpetual motion machines, was an enthusiast for Dianetics (which L. Ron Hubbard later turned into the Church of Scientology), and he was a firm believer in paranormal powers—telepathy, telekinesis, and astral projection (all now thoroughly disproven by research at the Koestler Parapsychology Unit).
(Confirmation bias may have been at work here: a belief in psi powers implicitly supports an ideology of racial supremacy, and indeed, that's about the only explanation I can see for Campbell's publication of the weirder stories of A. E. Van Vogt.)
Campbell wasn't the only wellspring of right-wing thought in golden age SF. No quick tour would be complete without mentioning Ayn Rand, the Russian émigré and bestselling author who invented the far right philosophy of Objectivism. This centred on (quote) "the concept of man as a heroic being, with his own happiness as the moral purpose of his life, with productive achievement as his noblest activity, and reason as his only absolute". Reason which, of course, was positioned as emotionless, neutral, factually grounded, and thereby exempt from accusations of bias and subjectivity. Rand held that the only social system compatible with this obviously-correct philosophy was laissez-faire capitalism: you can probably see why this appeals to sociopathic billionaires and their fans.
Perhaps the weirdest ingredient in the mix of ideas that gave rise to what became known in the 1990s as the Californian Ideology is Russian Cosmism, the post-1917 stepchild of the mystical theological speculation of a Russian Orthodox theologian, Nikolai Fedorovich Fedorov.
The Internet Encyclopedia of Philosophy is your one-stop shop for batshit philosophers who unduly influenced the space program and gave rise to modern Transhumanism. As it notes: "Nikolai Fedorovich Fedorov (born 1829, died 1903), was founder of an immortalist (anti-death) philosophy emphasizing "the common task" of resurrecting the dead through scientific means."
The illegitimate son of a Russian prince, Fedorov grew up a devout Russian Orthodox Christian. He worked as a librarian and as a teacher, and through his writings he was the formative influence on the Russian cosmists, a Russian philosophical movement that prefigured transhumanism (and specifically extropianism). The cosmists in turn influenced Tsiolkovsky, who was a major inspiration for Soviet attitudes to space exploration.
"Fedorov found the widespread lack of love among people appalling. He divided these non-loving relations into two kinds. One is alienation among people: 'non-kindred relations of people among themselves.' The other is isolation of the living from the dead: 'nature's non-kindred relation to men.'" ... "A citizen, a comrade, or a team-member can be replaced by another. However a person loved, one's kin, is irreplaceable. Moreover, memory of one's dead kin is not the same as the real person. Pride in one's forefathers is a vice, a form of egotism. On the other hand, love of one's forefathers means sadness in their death, requiring the literal raising of the dead."
Fedorov believed in a teleological explanation for evolution: that mankind was on a path to perfectibility determined by god, and human mortality was the biggest sign of our imperfection. He argued that the struggle against death would give all humanity a common enemy—and a victory condition that could be established, in the shape of (a) achieving immortality for all, and (b) resurrecting the dead to share in that immortality. Quite obviously immortality and resurrection for all would lead to an overcrowded world, so Fedorov also advocated colonisation of the oceans and space: indeed, part of the holy mission would inevitably be to bring life (and immortal human life at that) to the entire cosmos.
(The Wikipedia article on Fedorov discusses his transhumanist program in somewhat more detail than the IEP entry.)
The final word probably deserves to go to Nicholas Berdyaev (secondary source here) who in 1928 wrote, in a collection of liturgical essays on the Orthodox church:
The novelty of Fedorov's idea, one which frightens so many people, lies in the fact that it affirms an activity of man incommensurably greater than any that humanism and progressivism believe in. Resurrection is an act not only of God's grace but also of human activity. We now come to the most grandiose and bewildering idea of N. Fedorov. He had a completely original and unprecedented attitude towards apocalyptic prophecies, and his doctrine represents a totally new phenomenon in Russian consciousness and Russian apocalyptic expectation. Never before in the Christian world had there been expressed such an audacious, such an astounding concept, concerning the possibility of avoiding the Last Judgement and its irrevocable consequences, by dint of the active participation of man. If what Fedorov calls for is achieved, then there will be no end to the world. Mankind, with a transformed and definitively regulated nature, will move directly into the life eternal.
I'm going to confess, at this point, to having in my youth read translations of Tsiolkovsky's writing, but not Fedorov—he was relatively obscure in the west until recently. The forebears of the American space program—Robert Goddard, Jack Parsons, and of course Wernher von Braun—also read Tsiolkovsky. And through their writings, his plans for space colonization (and the ideas of Russian cosmism) leaked directly into the minds of science fiction authors like Robert Heinlein, Hal Clement, and Arthur C. Clarke.
Finally, I haven't really described Rationalism. It's a rather weird internet-mediated cult that has congealed around philosopher of AI Eliezer Yudkowsky over the past decade or so. Yudkowsky has taken on board the idea of the AI Singularity—that we will achieve human-equivalent intelligence in a can, and it will rapidly bootstrap itself to stratospheric heights of competence and render us obsolete—and terrified himself with visions of paperclip maximizers, AIs programmed to turn the entire universe into paperclips (or something equally inhospitable to human life) with maximum efficiency. He and his followers then dived into a philosophical rabbit maze of trying to reason their way into minimizing harms arising from a technology that does not yet exist and may not even be possible. (In contrast, Nick Bostrom focussed on the philosophical implications of digitizing human brains so we can all be raptured up to live in the great cloud computer in the sky, a very modern riff on the Christian eschatological theory of resurrection.)
American SF from the 1950s to the 1990s contains all the raw ingredients of what has been identified as the Californian ideology (evangelized through the de facto house magazine, WIRED). It's rooted in uncritical technological boosterism and the desire to get rich quick. Libertarianism and its even more obnoxious sibling Objectivism provide a fig-leaf of philosophical legitimacy for cutting social programs and advocating the most ruthless variety of dog-eat-dog politics. Longtermism advocates overlooking the homeless person on the sidewalk in front of you in favour of maximizing good outcomes from charitable giving in the far future. And it gels neatly with the Extropian and Transhumanist agendas of colonizing space, achieving immortality, abolishing death, and bringing about the resurrection (without reference to god). These are all far more fun to contemplate than near-term environmental collapse and starving poor people. Finally, there's accelerationism: the right wing's version of Trotskyism, the idea that we need to bring on a cultural crisis as fast as possible in order to tear down the old and build a new post-apocalyptic future. (Tommaso Marinetti and Nick Land are separated by a century and a paradigm shift in the definition of technological progress they're obsessed with, but hold the existing world in a similar degree of contempt.)
The hype and boosterism of the AI marketers collided with the Rationalist obsession in the public perception a couple of weeks ago, at the Artificial Intelligence Safety Summit at Bletchley Park. This conference hatched the Bletchley Declaration, calling for international co-operation to manage the challenges and risks of artificial intelligence. It featured Elon Musk being interviewed by Rishi Sunak on stage, and was attended by Kamala Harris, vice-president of the United States, among other leading politicians. And the whole panicky exercise seems to be driven by an agenda that has emerged from science fiction stories written by popular entertainers like me, writers trying to earn a living.
Anyway, for what my opinion is worth: I think this is bullshit. There are very rich people trying to manipulate investment markets into giving them even more money, using shadow puppets they dreamed up on the basis of half-remembered fictions they read in their teens. They are inadvertently driving state-level policy making on subjects like privacy protection, data mining, face recognition, and generative language models, on the basis of assumptions about how society should be organized that are frankly misguided and crankish, because there's no crank like a writer idly dreaming up fun thought experiments in fictional form. They're building space programs—one of them is up front about wanting to colonize Mars, and he was briefly the world's richest man, so we ought to take him as seriously as he deserves—and throwing medical resources at their own personal immortality rather than, say, a wide-spectrum sterilizing vaccine against COVID-19. Meanwhile our public infrastructure is rotting, national assets are being sold off and looted by private equity companies, their social networks are spreading hatred and lies in order to farm advertising clicks, and other billionaires are using those networks to either buy political clout or suck up ever more money from the savings of the poor.
Did you ever wonder why the 21st century feels like we're living in a bad cyberpunk novel from the 1980s?
It's because these guys read those cyberpunk novels and mistook a dystopia for a road map. They're rich enough to bend reality to reflect their desires. But we're not futurists, we're entertainers! We like to spin yarns about the Torment Nexus because it's a cool setting for a noir detective story, not because we think Mark Zuckerberg or Andreessen Horowitz should actually pump several billion dollars into creating it. And that's why I think you should always be wary of SF writers bearing ideas.