Sophisticated Idiots

M. Beatrice Fazi, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics

Rowman & Littlefield International, 248pp, £90.00, ISBN 9781786606082

reviewed by Dominic Fox

‘The trouble with computers,’ as Tom Baker's Fourth Doctor once remarked, ‘is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed’. As M. Beatrice Fazi's Contingent Computation argues, this is only the beginning of our troubles with computers. A computer is, notionally, deterministic in its operation: if you know exactly what state it is in, you can predict exactly what its next state will be. But knowing what its state will be a million operations further down the line is often next to impossible; and this is ultimately due neither to the material fallibility of computer hardware, nor to the simple incompetence of those who provide it with its instructions.

When we protest that not everything is computable, we usually mean that not everything is discretisable, amenable to being treated according to a formal and deterministic procedure: the real world is full of fuzzy boundaries and unpredictable eventualities, which a computational model must simplify and tidy away, traducing reality in the process of representing it. The novelty of Fazi’s argument lies in its mobilisation of non-computability as intrinsic, in a certain way, to the formal concept of computation itself. The contingency in ‘contingent computation’ is not an interruption of the ideal workings of the automaton by an unruly empirical reality, but ‘the presence, in every algorithmic procedure, of the incomputable’, a presence which takes the form of a Whiteheadian ‘prehension’ of infinity. A computational process is, in Fazi’s formulation, an ‘occasion’ in which the ‘propagative and generative character of the computational method’ is actualised, and this actualisation weaves together the deterministic and the indeterminate, the finitude of mechanism and the infinitude of ‘incomputable quantities’.

This is a promising formulation, but the devil is in the details. When we talk about computability in the terms set out by Alan Turing, we are talking about an idealised theoretical model of what a computer does: the Universal Turing Machine, a device which reads and writes symbols from a tape, moving from symbol to symbol, up and down the tape, in a series of discrete steps. No actual computer works like this, but the operations of every actual computer architecture could in theory be perfectly simulated by such a device: it is thus a useful imaginary machine to speculate with. The Turing machine has one important property that no actual computer has: its tape is infinitely long. A program that executes forever, repeatedly writing a symbol to tape and then advancing to the next position, will never run out of road. Real computers work with limited resources: time, storage space, electricity. Turing’s model enables us to think about the intrinsic limits of computation with infinite resources.
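The tape-and-table mechanism can be sketched in a few lines of Python (a toy model for illustration, not Turing's own formalism: the names `run` and `table` are invented here, and a dictionary-backed tape stands in for the infinite tape by treating absent cells as blank):

```python
def run(table, tape, state="start", pos=0, blank="_", limit=1000):
    """Step a Turing machine until it halts or `limit` steps pass.

    `table` maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 or -1 along the tape. The machine halts when
    no rule applies to the current (state, symbol) pair.
    """
    for _ in range(limit):
        symbol = tape.get(pos, blank)
        if (state, symbol) not in table:
            return state, tape  # no applicable rule: the machine halts
        state, write, move = table[(state, symbol)]
        tape[pos] = write
        pos += move
    raise RuntimeError("step limit reached without halting")

# A hypothetical machine that writes three 1s and then stops:
table = {
    ("start", "_"): ("a", "1", 1),
    ("a", "_"): ("b", "1", 1),
    ("b", "_"): ("done", "1", 1),
}
```

The dictionary tape grows only as cells are written, which is exactly the sense in which the idealised tape is ‘infinitely long’: there is always a next cell to move to.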

Fazi advances the claim, as an outcome of Turing’s famous formulation of the ‘halting problem’, that ‘we just do not know. . . if a program will halt until we run that program’. This is not altogether correct. The halting problem entails that there is no function computable by a Universal Turing Machine that can determine the ‘halting property’ of every program that such a machine can run. This does not mean that it is never possible to reason about what a given program will do. If I instructed you to begin with some arbitrarily large whole number and count down until you reached zero, you would not need to carry out my instructions to know that, eventually, you would come to the end, no matter what number you started with. Conversely, all the joy of the classic BASIC program

10 PRINT "HELLO WORLD"
20 GOTO 10

lies in knowing that, unless actively interrupted, it will never cease greeting the cosmos. Computer science abounds with proofs that a given algorithm will not only produce a correct final output for all valid inputs, but make use of bounded resources in doing so. But the halting problem (and its corollary, Rice’s Theorem) means that we cannot write a program that can determine, for the set of all possible programs, which programs effectively implement such an algorithm, and which do not (although in practice we can definitively identify some programs which do, and some which do not).
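The countdown instruction is easily rendered as code (in Python rather than period BASIC, purely for illustration), and its termination can be argued without ever running it:

```python
def countdown(n: int) -> int:
    """Count down from an arbitrary non-negative integer to zero.

    Termination is provable without executing the code: each pass
    through the loop strictly decreases n, and the non-negative
    integers are well-ordered, so the loop cannot run forever.
    """
    steps = 0
    while n > 0:
        n -= 1
        steps += 1
    return steps
```

However large the starting number, the reasoning holds: the proof of halting is independent of the run itself.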

Because Fazi mistakes the impossibility of globally determining the halting property of all programs for the global impossibility of determining the halting property of any program, she ends up making some very strange assertions about the ‘indeterminacy’ supposedly woven into computation. For example:

‘It is because of incomputability that, to this day, there does not exist a 100 percent bug-free computer program. The correctness of a piece of software cannot be finally determined because of the inherent limits of algorithmic methods of testing, which check a computer program by executing the tested code in a finite-step, line-by-line manner. The infinite possibilities of endless loops in a program are beyond the finiteness of the time-constrained procedure constructed to infer its fallaciousness.’

All actually-existing software runs on machines without infinite tape, whose space of possible states, while too vast to enumerate mechanically within the available lifespan of the universe, is accordingly finite; this in turn means that their halting behaviour is theoretically, if not practically, decidable. Suppose we have a program, such as the BASIC program above, with a manageably small number of possible states. The simplest procedure to determine the halting behaviour of this program is to check for cycles in the graph of its state transitions – a procedure for which there exists an algorithm that is itself guaranteed to terminate. Users of spreadsheet software will be familiar with a version of this algorithm, which ensures that the value of any cell can always be calculated by prohibiting cycles of references between cells. It is entirely possible to detect such cycles without looping endlessly around them.
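That cycle-checking procedure can be sketched as follows (a toy model, with the program's state reduced to its current line number; the function name `halts` and the dictionary encodings are illustrative assumptions, not a general verifier):

```python
def halts(transition, start):
    """Decide halting for a program with finitely many states.

    `transition` maps a state to its successor, or to None when the
    program halts. If a state ever repeats, execution is caught in a
    cycle and will never halt; with finitely many states, one of
    these two outcomes must eventually occur, so this check always
    terminates.
    """
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False  # revisited a state: an endless loop
        seen.add(state)
        state = transition(state)
    return True  # reached a halting state

# Toy model of the BASIC program: line 20 jumps back to line 10.
basic = {10: 20, 20: 10}.get
# A countdown from 3, by contrast, reaches a halting state (None).
count = {3: 2, 2: 1, 1: 0, 0: None}.get
```

Run on the model of the BASIC program, the checker reports an endless loop after two steps; run on the countdown, it reports halting. No endless looping is required in either case.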

The besetting flaw of automated tests that probe software by exercising parts of it stepwise under different conditions is not that they can’t break out of infinite loops, but that they typically only traverse a tiny subset of the possible paths that execution can take. The tester hopes that the scenarios examined are representative, but weird edge cases abound. This, however, is not a matter of indeterminacy or the ‘ingression of the infinite’, but of the strictly finite power of combinatorial explosion. We struggle to constrain programs to the path of correct behaviour, in the midst of an uncharted ocean of exploitable aberrancies.
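The arithmetic of that combinatorial explosion is easy to demonstrate (a deliberately simple sketch: each independent two-way branch in a program doubles the number of execution paths):

```python
from itertools import product

def all_paths(n_branches: int):
    """Enumerate every execution path through n independent two-way
    branches: 2**n paths in total, each a tuple of branch outcomes."""
    return list(product((True, False), repeat=n_branches))

# Even five branches give 32 distinct paths; a test suite that
# tries three scenarios covers less than a tenth of them, and
# thirty branches already yield over a billion paths.
paths = all_paths(5)
```

The quantities involved are strictly finite; they are merely, and decisively, too large.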

I am afraid that Fazi has somewhat mishandled the central, axiomatic question around which the rest of her argument turns, and this is a pity because both the framing of the argument – in terms of the actuality of computation, and the role of generative uncertainty in both its empirical and formal-axiomatic character – and her navigation of the surrounding philosophical terrain are interesting and adroit. I think she is right that the formal, axiomatic realm casts a kind of shadow on the empirical, such that it makes sense to think of real computational processes as pervasively affected by a non-empirical ‘contingency’ (or formal undecidability), and the Whiteheadian language she uses to describe this is persuasive. Not for the first time, I wonder whether the underlying problem is one of disciplinary segregation. Do media philosophers ever really talk to mathematicians and computer scientists?

A mathematician might have warned Fazi that it was a mistake to describe the digits of the decimal expansions of real numbers as ‘uncountable’, when they are, significantly for the question of their computability, countably infinite. A computer scientist might have introduced Fazi to the field of formal verification of software, and supplied her argument with powerful tools for exploring the boundary between the formally knowable and the formally unknowable. Either might have encouraged her to take these matters as seriously as she has taken the finer points of Deleuze’s account of transcendental sensibility, or Whitehead’s notions of prehension, ingression and actualisation. The technical details matter here not merely because haughty specialists in these fields will cavil and nitpick if you get them wrong, but because they are, as Fazi herself acknowledges, the levers which control the possibilities of cogent philosophical argument on the subject of computation – a subject which plainly matters to us all, as automation continues to digest the human world.

Dominic Fox is a writer and programmer living and working in London. He is the author of Cold World and blogs at www.thelastinstance.com.