One-Dimensional Universe

George Dyson, Turing's Cathedral: The Origins of the Digital Universe

Allen Lane, 432pp, £25.00, ISBN 9780713997507

reviewed by Robert Barry

In 1983, the American film WarGames ended with a computer called Joshua suffering a nervous breakdown when it realised that thermonuclear war, like tic-tac-toe, is a game you cannot win. The whole thing comes about because of a conversation held about an hour and a half earlier between Joshua, its synthesised voice hooked up to a stereo hi-fi system in a suburban bedroom, and a teenage computer hacker named David Lightman (played by Matthew Broderick).

‘Shall we play a game?’ asks the computer.
‘Love to,’ Lightman replies. ‘How about Global Thermonuclear War?’
‘Wouldn't you prefer a nice game of chess?’
‘Later, let's play Global Thermonuclear War.’

A similar conversation must have taken place at some level thirty years earlier at the Institute for Advanced Study in Princeton, when Joshua's oldest direct ancestor, the very first digital computer with its own stored programs and light-speed random-access memory, was set to work on its first task: calculating thermonuclear explosions. It was not until 1956, four years after the computer's sums bore fruit with the first H-bomb test at Enewetak Atoll, that one of the bomb's principal architects, Stanislaw Ulam, created a chess program for the Mathematical Analyzer, Numerical Integrator and Computer (aka the MANIAC).

Long before the first arcade shoot-'em-up, the history of computing was marked through and through by games and game-play. Before founding the Electronic Computer Project at the Institute for Advanced Study, mathematician John von Neumann had co-authored, with Oskar Morgenstern, a volume called Theory of Games and Economic Behavior (Princeton University Press, 1944), which would provide the foundation for the ‘game theory’ soon to dominate nuclear strategy (and, later, economic policy). It was Ulam's fondness for playing solitaire - and von Neumann's for the roulette wheel - that led the pair to develop a class of algorithms for solving otherwise intractable statistical problems by repeated random sampling in computer simulations, which they christened ‘the Monte Carlo method’. But even more than gaming, the history of the computer is marked by the shadow of thermonuclear war.
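(The method is easy to see in miniature. The sketch below is mine, not Dyson's - a few lines of modern Python rather than anything that ran at Los Alamos - showing the textbook Monte Carlo exercise: estimating π by scattering random points over a square and counting how many fall inside the inscribed quarter-circle.)

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Monte Carlo estimate of pi: throw random points into the unit
    square and count the fraction landing inside the quarter-circle."""
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    # The hit ratio approximates the quarter-circle's area, pi/4.
    return 4 * inside / samples

print(estimate_pi())  # roughly 3.14, sharpening as samples grow
```

The answer is never exact; it simply gets statistically better the longer you play - which is what suited the method to the otherwise intractable integrals of bomb physics.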

George Dyson was born in 1953 - just over six months after the first hydrogen bomb test - the son of theoretical physicist Freeman Dyson, who had been living and working at the IAS since 1948. So, in a way, it is quite natural that he should present the story of the Princeton computer project as something halfway between Biblical fable and light opera; these events were, after all, the backdrop to his childhood reverie. ‘In the beginning was the command line,’ he opens, before presenting a cast of ‘principal characters’ as if coming to the aid of some hypothetical future casting director. From there on in, Dyson oscillates between the messianic tone of the former and a series of cheerfully detailed biographical sketches of everyone and everything, from the land on which the Institute stood to the lady who drew up the lunch menus in the canteen.

Turing's Cathedral might be read as an extended gloss on a book written almost half a century earlier. In 1966, Hannes Alfvén, the only physicist (Dyson informs us) to quit Los Alamos when it was discovered that the Axis powers were not seriously pursuing the atomic bomb, published a short dystopian science fiction novel called The Great Computer. The narrative unfolds like a wistful far-future fable, in which the appearance of the first computer is reminiscent of the birth of Christ - in an old stable, attended by ‘wise men’ and the twinkling of stars. Alfvén pictures this idyllic scene against the backdrop of the mushroom cloud, for, as Dyson notes, ‘The digital universe and the hydrogen bomb were brought into existence at the same time.’ The story ends with a world dominated by supercomputers in which man has become superfluous. Aside from being considerably shorter, Alfvén's book differs from George Dyson's mainly in that the latter is not a work of fiction.

It is hard to say whether the ‘first computer’ Alfvén had in mind was the Princeton machine or its twin at Los Alamos, or some other - perhaps earlier - invention. But for Dyson, the MANIAC was the first true ‘Universal Turing Machine’. While devices such as the Colossus at Bletchley Park or the Moore School's ENIAC were built to perform specific functions, the MANIAC was built so that it might, in principle, perform any task that any other computer could perform. This was the opening of a digital universe. In the mid-1950s, the Norwegian-Italian mathematician Nils Aall Barricelli began to seed this universe with life - self-reproducing binary species modelling a symbiotic evolution in the one-dimensional ticker-tape world of the Princeton computer.
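(Barricelli's actual rules were subtler, involving elaborate collision ‘norms’, but a loudly simplified toy - my own sketch, not his code - gives the flavour of that one-dimensional world: a circular array of cells in which each nonzero ‘gene’ n survives in place and tries, each generation, to copy itself n cells away, with contested cells left empty.)

```python
import random

def step(world: list[int]) -> list[int]:
    """One generation of a toy Barricelli-style universe. Each nonzero
    gene n at position i claims its own cell and the cell (i + n) mod
    size; a cell claimed by two different numbers is left empty (a
    crude stand-in for Barricelli's collision norms)."""
    size = len(world)
    claims: dict[int, set[int]] = {}
    for i, n in enumerate(world):
        if n == 0:
            continue
        for target in (i, (i + n) % size):
            claims.setdefault(target, set()).add(n)
    nxt = [0] * size
    for pos, genes in claims.items():
        if len(genes) == 1:  # uncontested: the gene takes the cell
            nxt[pos] = next(iter(genes))
    return nxt

random.seed(1953)
world = [0] * 64
for i in random.sample(range(64), 8):
    world[i] = random.choice([-3, -2, -1, 1, 2, 3])  # seed a few genes
for _ in range(10):  # print ten generations of the tape
    print(''.join('.' if n == 0 else '*' for n in world))
    world = step(world)
```

Over a few generations, some numbers overwrite one another and die out while others settle into stable, self-propagating patterns - a cartoon of the symbioorganisms Barricelli watched crawl across the IAS machine's memory.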

In 1964, as Barricelli was claiming the first stirrings of intelligence in the virtual world of his mathematical simulations, Herbert Marcuse diagnosed a regime of ‘one-dimensional thought’ which was paralysing modern civilisation. One-dimensional society, Marcuse argued, flattens out contradictions, such that no apparent discrepancy might be perceived in an advertisement for a ‘luxury’ atomic bomb shelter. Meanwhile, a glib ‘happy consciousness’ prevails at places like the RAND Corporation, where nuclear war is a parlour game played by top strategists in the building's lower basement.

In the same year, 1964, Paul Baran published a series of papers called ‘On Distributed Communications’ at the RAND Corporation, the organisation most concerned with the promotion of both pre-emptive nuclear war and precautionary civil defence. Baran's papers laid out the basic principles of packet-switching: the division of a given message into small ‘message-blocks’ to be transmitted independently along multiple routes of a distributed network and reassembled at their destination. Their purpose was to suggest a telecommunications system that would survive a nuclear attack. From the mid-sixties, the nuclear strategists and systems analysts of RAND were moving into government, bringing to bear their computer simulations and Monte Carlo method on the design of Lyndon Johnson's ‘Great Society’. Meanwhile, at ARPA, the Defense Department's research agency later renamed DARPA, Baran's packet-switching was becoming the backbone of the ARPANET, the network that would grow into the Internet.
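(Stripped of its Cold War engineering, the core idea fits in a few lines. The sketch below is mine - a bare illustration, nothing like Baran's actual design with its redundant links and routing doctrine - showing why numbered blocks survive arriving out of order.)

```python
import random

BLOCK_SIZE = 8  # bytes per 'message-block'; an arbitrary choice here

def to_blocks(message: bytes) -> list[tuple[int, bytes]]:
    """Divide a message into numbered message-blocks."""
    return [(seq, message[i:i + BLOCK_SIZE])
            for seq, i in enumerate(range(0, len(message), BLOCK_SIZE))]

def reassemble(blocks: list[tuple[int, bytes]]) -> bytes:
    """Restore the original message from blocks received in any order."""
    return b''.join(data for _, data in sorted(blocks))

message = b'a telecommunications system that would survive an attack'
blocks = to_blocks(message)
random.shuffle(blocks)  # stand-in for blocks taking different routes
assert reassemble(blocks) == message
```

Because no single route, and no single block, is indispensable, the network as a whole can lose nodes - to congestion or to warheads - and still deliver the message.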

The anticipated apocalypse never came, but today we act almost as though it did anyway. Increasingly, we live, work, and communicate in the one-dimensional world developed at Princeton and Los Alamos sixty years ago. ‘The entire digital universe,’ writes Dyson, ‘from an iPhone to the Internet, can be viewed as an attempt to maintain everything, from the point of view of the order codes, exactly as it was when they first came into existence’ in the blinding flash of an artificial sun in the North Pacific. And the point of view of the codes, those simple strings of self-replicating numbers with which Barricelli first populated the digital universe, is increasingly our own point of view.

Marshall McLuhan used to talk about human beings as the ‘sex organs of the machine world’. Thanks to the self-reproducing automata beloved of Barricelli and von Neumann, even this humble role is no longer required of us, and we become, as in Alfvén's tale, mere ‘parasites’ on the computers. ‘Evolution in the digital universe,’ Dyson claims, ‘now drives evolution in our universe, rather than the other way around.’ Having created a star at Enewetak, the next step was to build a cosmos and inhabit it.

Robert Barry is a senior editor at Review 31.