Biological machines and the materiality of computation

  • Workshop on Synthetic Biology: Views from the Future

  • Prof. Dr. Georg Trogemann
  • January 26th-27th, 2012
  • Bergen, Norway

We address the question of whether synthetic biology is capable of changing the way we think about technical artifacts, and about computation and information processing in particular. Our current concept of computation is based on a precise understanding of the notion of an algorithm, which goes back mainly to mathematical work of the 1930s by Alan Turing, Alonzo Church, Kurt Gödel, Emil Post, and others. By now a vast number of notions of algorithm are available, all of which have turned out to be equivalent in their fundamental capabilities. Despite considerable differences in formalism, they all share some common characteristics: algorithms are composed of a series of elementary steps, which are executed one by one; only a small, fixed repertoire of step types is needed; and each step, as well as the selection of the next step, is fully deterministic. From the theoretical point of view, nothing else is necessary to construct the whole computational universe. For reasons of efficiency, a series of additional concepts has been developed to support the description of complex computational systems. Most important is the grouping of sequences of elementary steps into new elementary units and the nesting of these units in hierarchies. From this perspective, the algorithm is a very general concept, applicable to any operational cycle in craft, industry, or organizations, except that for an algorithm in the strict sense we demand a semiotic description in order to avoid ambiguity.
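To make this hierarchical picture concrete, here is a minimal sketch (not from the original text; all names are illustrative) of a fixed repertoire of deterministic elementary steps being grouped into a new unit, which is in turn nested into a higher one:

```python
# A minimal sketch: a fixed repertoire of deterministic elementary steps,
# grouped into new units and nested in a hierarchy (names are illustrative).

def inc(state, reg):
    """Elementary step: increment one register by 1."""
    state[reg] += 1
    return state

def add(state, src, dst):
    """Composed unit: 'add' realized as a series of elementary increments."""
    for _ in range(state[src]):
        state = inc(state, dst)
    return state

def multiply(state, a, b, out):
    """Next level of the hierarchy: 'multiply' as nested additions."""
    for _ in range(state[a]):
        state = add(state, b, out)
    return state

state = {"a": 3, "b": 4, "out": 0}
print(multiply(state, "a", "b", "out")["out"])  # always 12: fully deterministic
```

Run on the same initial state, the hierarchy always yields the same result; determinism and composition are, theoretically, the whole story.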


On the basis of formal notations, algorithms are not only tools for various scientific disciplines but the overarching contemporary principle of techno-scientific thinking. From this general viewpoint it is not surprising that the new field of synthetic biology is also driven by algorithmic descriptions and associated computer simulations. Conversely, algorithmic methodology has always been influenced by principles learned from observing nature, e.g. evolutionary computing, artificial neural networks, swarm behavior, and L-systems. Although there have always been productive connections between computer science and biology, the deliberate dynamization of hardware in the field of biomolecular computation will probably change our practical understanding of algorithms and thus our models of computation. The Turing limit of computation may well hold for biological machines too. But that is not the crucial point; more important is the convergence of organic and mechanical models and what this means for our way of seeing machines.
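As one concrete instance of such a nature-derived formalism, the following sketch implements Lindenmayer's textbook "algae" L-system (axiom A, rules A → AB, B → A); the code itself is an illustration added here, not part of the original text:

```python
# Illustrative sketch of an L-system, one of the nature-inspired formalisms
# mentioned above (Lindenmayer's classic "algae" system).

RULES = {"A": "AB", "B": "A"}

def rewrite(word, rules):
    """One derivation step: every symbol is rewritten simultaneously."""
    return "".join(rules.get(symbol, symbol) for symbol in word)

word = "A"  # axiom
for generation in range(6):
    print(generation, word)
    word = rewrite(word, RULES)
```

Note that, unlike the strictly sequential steps described above, every symbol is rewritten in parallel in each generation, which already hints at the massively parallel character of the biological machines discussed below.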


There are three players in the game of computation: human thinking, semiotics, and material (hardware). In our classical model, the semiotics (programs) and the human thinking (analysis, interpretation, and handling of input/output relations) are the flexible constituents, while the hardware is the fixed and invariant part of computation. Through the use of biological processes, the machines themselves become flexible and dynamic. As a result, computing devices increasingly resemble natural organisms. These emerging machines are self-assembling and permanently restructuring themselves on a physical level; nevertheless, they do so on the basis of elementary operations. Their basic characteristics, however, are closer to the concept of autopoietic systems than to that of mechanical devices. They are inherently non-terminating, massively parallel, stochastic, self-referential, adaptive, and self-modifying. Solving different problems is not a matter of programming but of reconfiguring the initial conditions. If reconfiguration is limited, such a system resembles the setup of an experiment more than a classical computer.
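The contrast can be sketched in code, with the caveat that the reactions, rates, and counts below are invented purely for illustration: a Gillespie-style stochastic simulation in which "solving a different problem" means changing the initial molecule counts, while the reaction rules, the system's fixed "hardware", stay untouched:

```python
# Hedged sketch of a stochastic, parallel chemical system (Gillespie-style).
# The reaction network and all numbers are illustrative assumptions.

import math
import random

REACTIONS = [
    # (rate constant, species consumed, species produced)
    (1.0, {"A": 1, "B": 1}, {"C": 1}),  # A + B -> C
    (0.5, {"C": 1}, {"A": 1, "B": 1}),  # C -> A + B
]

def propensity(rate, consumed, counts):
    """How likely a reaction is to fire next, given current molecule counts."""
    p = rate
    for species, n in consumed.items():
        p *= math.comb(counts[species], n)  # ways to pick the reactants
    return p

def gillespie(counts, t_end=10.0, seed=0):
    """Stochastic simulation: behaviour is set by the initial counts alone."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [propensity(r, c, counts) for r, c, _ in REACTIONS]
        total = sum(props)
        if total == 0:
            break                        # nothing can react any more
        t += rng.expovariate(total)      # stochastic waiting time
        pick = rng.uniform(0, total)     # stochastic choice of next reaction
        for p, (_, consumed, produced) in zip(props, REACTIONS):
            if pick < p:
                for s, n in consumed.items():
                    counts[s] -= n
                for s, n in produced.items():
                    counts[s] += n
                break
            pick -= p
    return counts

# "Reprogramming" happens in the initial conditions, not in the rules:
print(gillespie({"A": 100, "B": 100, "C": 0}))
print(gillespie({"A": 10, "B": 200, "C": 50}))
```

Nothing in the reaction rules is edited between the two runs; only the initial configuration differs, which is exactly the sense in which such a system is set up like an experiment rather than programmed like a classical computer.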