Some thoughts from Matthew Ritchie

Matthew Ritchie/ The question of what constitutes ‘scientific imagery and scientific content’ as specific terms, separate from the representation of other forms of human enquiry, is evolving – and therefore often poorly defined or indefinable. Even the term ‘scientist’ only appears in the early nineteenth century, as a counterpart to the idea of the ‘artist’, precisely when it becomes clear that there are many specific forms of science, just as there are many specific forms of art.

The goals, uses, materials and processes of science and art are not necessarily exclusive, but are often mirrors of each other. In both fields, a premium is placed on the freedom of enquiry and on instrumentalizing both physical and metaphysical data, all in the service of hypothetically reciprocal (but as often competing) social and theoretical ends. The differences in presentation and interpretation often lie as much in the chosen application of visualization technology as in the expected terms of service.

Another way to think about it might be the more recent question of ‘image’ versus ‘file’. Most scientific imagery is produced for professional journals in the form of reductive or essentialized quantitative diagrams – tables, graphs, Muller plots and so on – that ostensibly reflect an ‘objectively’ procured dossier, or file, of source data (see Galison and Daston for how that term has changed); the file of data is the inherent scientific content and the nominal work product. In reality, many of these graphs are so graphically essentialized as to be almost useless scientifically, while the contextual information such as spatial co-ordinates, time and duration is easily manipulated to help the simple data visualization conform to the research premise. But the premise remains that the source information is directly accessible and subject to review.

In a few data-rich fields – astronomy, network theory, medicine, biology and so on – more complex data-visualization methods can produce more highly developed visual forms, or qualitative images, that superficially retain the initial informational topos. But here the contextual information is often stripped away: the essential keys to their scientific sources (such as spatial co-ordinates, time and duration) are removed, and the primary file is doubly compressed, from a data table into a fixed image, now far more resistant to future informational decompression and more accessible to aesthetic compression.
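To make that double compression concrete, here is a minimal sketch (Python with numpy and matplotlib; the data, filename and figure settings are purely illustrative assumptions, not anything from the text above): a table of timed coordinates is rendered into a fixed raster image, after which the units, timestamps and exact values can no longer be recovered from the pixels alone.

```python
import numpy as np
import matplotlib.pyplot as plt

# First compression: a 'file' of source observations with full context
# (spatial co-ordinates, time and duration are all still recoverable).
rng = np.random.default_rng(0)
table = np.column_stack([
    rng.uniform(0, 10, 200),           # x co-ordinate
    rng.uniform(0, 10, 200),           # y co-ordinate
    np.sort(rng.uniform(0, 60, 200)),  # observation time (minutes)
])

# Second compression: the table becomes a fixed image. Axes, ticks and
# the time column are stripped away; only an aesthetic trace remains.
fig, ax = plt.subplots(figsize=(4, 4))
ax.scatter(table[:, 0], table[:, 1], s=5, c='black')
ax.set_axis_off()
fig.savefig('figure.png', dpi=150)

# The PNG is now resistant to 'decompression': nothing in its pixels
# tells you the units, the timestamps, or the original data values.
```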

Personally, I appreciate the visual economy of the former as much as the complexity of the latter forms. But in many cases of the first type, efforts to instrumentalize one kind of knowledge, the primary database or file (assumed to be the ‘science’), with a simple visualization technology such as a table, graph or diagram, are assumed to produce an image that is ‘not art’, primarily because of its ability to be decompressed, or to be ‘useful’. In many cases of the second type, the combination of a given visualization technology and the secondary database, or re-processed file (still science?), is assumed to more thoroughly instrumentalize the second kind of knowledge, and to produce another, tertiary kind – an image (art?).

In art, the mirror premise is that the subjectively procured source information (or at least information authenticated by the maker) is, like the source data in a science experiment, somehow still directly accessible, either as ‘raw’ content or through material form. In reality, Frederik Stjernfelt’s analysis of the sketch describes how the original terms of any given hypostatic abstraction are progressively obscured as they are aestheticized through a similar three-stage process. This should be no surprise, as all information accessible to what Damasio calls the ‘core self’ must undergo a similar three-stage compression, in order to present the information in a form that is accessible through the viewers’ own theory of picture.

There is another, ongoing version of this discussion in the art world between the terms ‘photograph’, ‘image’ and ‘file’ – all of which also produce ‘pictures’. What we lack is a meaningful grammar to discover any distortions in the translation, or a gauge of the inherent mirror symmetries that might govern this process of aesthetic compression – or how information decompresses into and out of pictures – and so we don’t know how to evaluate the exchange between science and art. Sci-art seems to fall into this space.

So maybe one useful question is: can multiple forms of enquiry, research and knowledge production – whether intuitive, deductive, scientific or artistic – be engaged and represented simultaneously, in a way that allows them to be coherently transposed, decompressed and usefully compared inside a common theory of picture? A promising form to consider carefully might be the diagram, or ‘informational drawing’, which re-emerges in the seventeenth century as the essential tool of scientific research, and whose ability to concretize process can be clearly distinguished from Foucault’s concept of the ‘table’, or closed disciplinary array, by its ability both to cut across boundaries and to produce pictures of thought structures – or theories of picture.

Can this be done? Over the last few years, I’ve developed my own simple visual grammar of how these theories might interact diagrammatically, one that visually unifies certain hitherto diverse approaches in the histories of science, aesthetic theory, network theory and ontological philosophy. This followed on from the premise that the dynamics of a complex system can be described as a walk, or game, drawn on a high-dimensional free energy surface, and that more knowledge of the properties of the system can be obtained if one knows the distribution and properties of all the local minima of that surface. Just as there are minima for physical reality (the four constants), there are ontological minima for the human framing of knowledge, and spatio-temporal minima for the useful proliferation of information within physical reality in forms that can be represented in the ontological framing of knowledge. However, most of the time any system is confined in its deep local minimum: the high transaction costs of moving between ontological and temporal energy minima mean that any new form trying to escape has a tendency to fall back into the local informational minimum – hence the difficulty of defining a combinatory position between science and art!
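That confinement can be illustrated with a minimal sketch (Python; the one-dimensional double-well potential, temperature and step size are all illustrative assumptions, standing in for the high-dimensional free energy surface described above): a Metropolis-style random walker started in one minimum almost never pays the cost of crossing the barrier into the other basin, and keeps falling back where it started.

```python
import numpy as np

# Illustrative 1-D 'free energy surface' with two minima separated by a barrier.
def energy(x):
    return (x**2 - 1.0)**2  # minima at x = -1 and x = +1, barrier at x = 0

# Metropolis-style walk: uphill moves are accepted only with a probability
# that shrinks exponentially with the 'transaction cost' of the move.
rng = np.random.default_rng(1)
x, temperature = -1.0, 0.1   # start in the left local minimum
crossings = 0
for _ in range(100_000):
    proposal = x + rng.normal(scale=0.1)
    cost = energy(proposal) - energy(x)
    if cost <= 0 or rng.random() < np.exp(-cost / temperature):
        x = proposal
    if x > 0:                # crossed the barrier into the other basin
        crossings += 1
        x = -1.0             # reset, to count how rarely this happens

# At this low temperature the count stays near zero: the walker remains
# confined in its deep local minimum, as in the paragraph above.
print(f"barrier crossings in 100,000 steps: {crossings}")
```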
