A Syllogism in Turing's 1950 Paper

For the Working Group on the Aesthetics and Politics of Artificial Intelligence that I am teaching this quarter, I recently had to closely re-read Turing's 1950 paper "Computing Machinery and Intelligence". Scott Aaronson famously maintains that "one can divide everything that’s been said about artificial intelligence into two categories: the 70% that’s somewhere in Turing’s paper from 1950, and the 30% that’s emerged from a half-century of research since then", and I very much agree with this sentiment. Among the many fascinating and clairvoyant arguments of the paper is a refutation of what Turing calls the "argument from informality of behavior".

Read more →

NIPS Interpretable Machine Learning Symposium Spotlight Talk

Paper: "I know it when I see it". Visualization and Intuitive Interpretability

Exhibiting Computing Machines. Alien Phenomenology as a Speculative Principle for Exhibition Design

When museums started to exhibit historically significant computing machines in the late 1970s, two major problems became immediately apparent: the problem of preservation and the problem of display. The problem of preservation asks: how can we preserve computing machines so their historical significance can be experienced by future generations? The problem of display asks: how can we structure this experience visually and spatially? While both the problem of preservation and the problem of display are first and foremost practical problems, they have latent philosophical implications. With Philip Agre, I propose that an investigation of these implications can not only enrich the philosophical discourse but also inform a critical technical practice which in this case is also a critical curatorial practice.

Read more →

Intuition and Epistemology of High-Dimensional Vector Space 2: Empirical Visualizations

In the digital humanities, scatterplots and similar visualizations have become such a common sight that they begin to look like the one natural way of translating high-dimensional data into Euclidean space, with the dimensionality reduction techniques behind them sharing this appearance of universality. And there are good reasons for this status quo. As argued in the previous post, the only geometrically intuitive space is Euclidean space, so there is no way around dimensionality reduction. Furthermore, while t-SNE and other dimensionality reduction techniques necessarily always distort at least some portion of the high-dimensional data, it is safe to assume that, at the end of the day, these distortions are negligible. However, there is a big caveat to these types of visualization, or rather to these specific combinations of numerical and geometric visualizations, that is often overlooked: they tell us nothing about the way in which they were obtained. In other words: the semantic structure of the visualization (relations, clusters, transformations, etc.) bears no resemblance to, and provides no information about, the semantic structure of the algorithms used to create it. The tool is absorbed into the result, and thus the mediation of the result becomes invisible, or at least inaccessible to any critical analysis.
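To make the caveat concrete, here is a minimal, hypothetical sketch (not taken from the post; it assumes scikit-learn and matplotlib, and the dataset and parameter choices are arbitrary): a linear method like PCA and a stochastic, non-convex method like t-SNE differ radically as algorithms, yet both terminate in the same kind of two-dimensional scatterplot, from which nothing about either procedure can be read off.

```python
# Hypothetical illustration: two very different dimensionality reduction
# algorithms produce scatterplots of the same outward form, so the plot
# alone reveals nothing about how it was made.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 64-dimensional data as a stand-in corpus

# Linear projection vs. stochastic neighbor embedding: algorithmically
# dissimilar, visually interchangeable once rendered as scatterplots.
embeddings = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "t-SNE": TSNE(n_components=2, random_state=0).fit_transform(X),
}

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, (name, emb) in zip(axes, embeddings.items()):
    ax.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
    ax.set_title(name)
plt.show()
```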

Read more →

Intuition and Epistemology of High-Dimensional Vector Space 1: Solving is Visualizing

Vector space models are mathematical models that make it possible to represent multiple complex objects as commensurable entities. They became widely used in Information Retrieval in the mid-1970s, and subsequently found their way into the digital humanities field, a development that is not surprising, given that the above definition, applied to literary texts, is very much a description of distant reading in its most pragmatic interpretation. There is no doubt that vector space models work well, not only as a tool for distant reading, but also as a tool for more general natural language processing and machine learning tasks. Consequently, however, the justification of their use is often suspiciously circular.
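As a minimal sketch of what such a model does (an illustrative example assuming scikit-learn, not an excerpt from the post): a handful of short texts become commensurable by being mapped onto vectors over a shared vocabulary, after which a single measure such as cosine similarity can compare any text to any other.

```python
# Toy vector space model: three short "documents" become commensurable
# vectors over a shared term vocabulary. Corpus and numbers are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the whale pursued the ship across the sea",
    "the ship sailed the sea in search of the whale",
    "the garden party ended with tea and cake",
]

# Each document is mapped to a point in a high-dimensional term space.
vectors = TfidfVectorizer().fit_transform(docs)

# Cosine similarity makes the otherwise incomparable texts commensurable:
# the two whale sentences score high against each other, the third does not.
print(cosine_similarity(vectors).round(2))
```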

Read more →