Embrace the Latent Space. Notes on the Curatorial Challenges of an Emerging Media Art Form

On October 25th, 2018, a GAN-generated image created by the French collective Obvious sold at auction for $432,500. A previous version had been sold to a private collector for $10,000 some weeks prior. This caused massive outrage within the AI art community, mainly because the image in question was generated using source code created and published by others (Vincent 2018). While the legal implications of this remain unclear, an extensive debate about the integrity of the transaction began to unfold and, at the time of writing, is still ongoing. All this suggests that the notion of AI art is currently being re-negotiated between different stakeholders: established media artists (e.g. Pierre Huyghe), emerging media artists (e.g. Helena Sarin), computer scientists with artistic ambitions (e.g. Alexander Mordvintsev), established protagonists of the “art world” like Christie’s, and finally investors, collectors, and people simply exploiting a blockchain-like money-making opportunity (Obvious). This negotiation takes place outside of any aesthetic considerations. The main discursive contribution of AI art, the question of machine creativity, is overwritten by the important but mundane question of attribution.

Left: Obvious, Portrait of Edmond de Belamy. Right: Ahmed Elgammal, Faceless Portrait of a Merchant. Both works are simple samples from the latent space of a GAN trained on classical portraits, not even changing the square format that is both unusual for portrait paintings and a technical requirement of many GAN architectures. Both works were recently exhibited to great media response and subsequently sold.

We could of course treat these issues as the growing pains of an emerging media art form that will eventually develop proper modes of authorship and monetization, much like video art had to find ways to reverse its own subversion of the art market by producing limited editions of technically unlimited works. We could also attribute them to what François Chollet has called “GANism”, the over-utilization of one specific technical approach (Goodfellow et al. 2014) by many AI art protagonists. I would like to suggest, however, that these issues point to a deeper structural problem that affects not only AI art but many media art forms: the problem of display. How can we exhibit AI art without imitating the modes of display of more established art forms, for instance by literally displaying a GAN-generated image in a golden frame?

One reason for the problem of display is a lack of critical vocabulary suited to describing the relations between complicated technical artifacts: the relation between a computer and an image, for instance, or, more generally, between one thing and another thing. Hence, recent philosophical frameworks addressing the object-object relation could be said to indirectly address the problem of display as well. Specifically, object-oriented ontologies, if we take them seriously and literally (and with a grain of salt), can serve as a speculative principle for exhibition design, notably for the design of AI art exhibitions.

I use the term object-oriented ontologies as an umbrella term here to conveniently address a variety of quite different theoretical frameworks that nevertheless share the common goal of re-investigating the object-object relation. I embrace much of the criticism brought forward against object-oriented ontologies as a somewhat over-hyped academic trend, “the thing that happened after poststructuralism” (Galloway 2015), as Alexander Galloway writes. Specifically, I tend to side with Andrew Cole in arguing that the rejection of “correlationism” (Meillassoux 2010, 5) in many object-oriented ontologies relies on a selective and/or narrow reading of Kant.

This is of course not an entirely novel idea: the visual arts discovered object-oriented ontologies early. As one article half-ironically states: “For cutting-edge artists looking to lend their work some conceptual heft, Object-Oriented Ontology has become the faddish successor to such previous intellectual trends as structuralism and postmodern theory.” (Kerr 2016) This summarizes the problem well: as a theoretical framework for the visual arts, object-oriented ontologies often remain secondary and descriptive (and always a little pretentious), rather than informing any initial curatorial decisions. So, why bother then? Why not be pragmatic? Do we really need the constraints of an academic philosophical framework to address the problem of display? What I would like to suggest is to embrace one particular concept of object-oriented ontologies: the alienness of a thing’s perspective on the world. After all, if AI art is indeed supposed to raise the question of machine creativity, it is exactly this machinic perspective that should be explored.

In particular, I think it is worthwhile to consider the notion of “alien phenomenology” that I take from Ian Bogost’s practice-oriented flavor of object-oriented ontology (Bogost 2012), but which resonates implicitly throughout Graham Harman’s work as well (Harman 2011). In addition to Harman, Bogost derives the concept of “alien phenomenology” from Thomas Nagel’s idea of an “objective phenomenology”, developed in his famous essay “What Is It Like to Be a Bat?” (Nagel 1974). The goal of an objective phenomenology, Nagel writes, would be to “describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having these experiences.” (Nagel 1974, 449) The only way to accomplish that, Bogost adds, is by means of analogy: “The bat [regarding its ability to perceive the world by sonar] is like a submarine.” This is, of course, the easy way out. The analogy conveniently releases us from the burden of thinking the unthinkable by letting the trope do the heavy lifting. However, in many ways, the bat is not like a submarine, and the analogy only works because it immediately falls back onto the original problem. In other words: to say “the bat is like a submarine” only makes sense if we already know that the actual problem is the impossibility of thinking a thing’s perspective on the world. We are thus right back where we started, reminded of the fact that, as Andrew Cole says, we might just as well “consult [our] local analytic philosopher, who will tell [us] that metaphysical mistakes are mistakes in natural languages” (Cole 2015).

I would like to argue, however, that the “analogical approach” simply does not go far enough, and that it gains merit only if we approach the concept of analogy from a more technical perspective, one more appropriate to our object of interest, the computer. The goal would be an alien phenomenology that is alien in the Brechtian sense: a defamiliarized, technical perspective that nevertheless has something to say about both itself and the real world. In fact, the latent space sampled by a generative adversarial network could be described as an analogical space, where the produced literal images are also analogical “images” which, as a set, constitute an analogy of the machine’s perspective on the world. Unlike the analogy “the bat is like a submarine”, which shifts all the complexity to the trope, a multitude of images empirically approximates the machine’s perspective on the world.
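To make this idea of empirical approximation concrete, here is a minimal sketch in PyTorch. The generator, the latent dimensionality, and the image size are all illustrative assumptions, not any particular artist’s method; in practice the generator would be a trained network loaded from a checkpoint.

```python
# Minimal sketch: a "multitude of images" sampled from a latent space.
# G is a hypothetical stand-in for a pre-trained GAN generator; 100 latent
# dimensions and 64x64 RGB outputs are assumptions typical of DCGAN-style
# models, chosen for illustration only.
import torch

latent_dim = 100

G = torch.nn.Sequential(               # stand-in for a trained generator
    torch.nn.Linear(latent_dim, 3 * 64 * 64),
    torch.nn.Tanh(),                   # outputs in [-1, 1], as is common
)

z = torch.randn(1000, latent_dim)      # 1000 random points in latent space
with torch.no_grad():
    images = G(z).view(-1, 3, 64, 64)  # 1000 images, no single one privileged
```

The point of the sketch is that no single element of `images` carries the analogy; only the set as a whole approximates the machine’s perspective.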

For AI art this suggests that we have to embrace the latent space: a work of AI art, through the lens of alien phenomenology, can only consist of the entirety of a latent space, of all the images we can produce from such a space: hundreds and thousands of images, interesting images, boring images, mode-collapse images, adversarial images – all of them. In other words: exhibiting AI art, or more precisely, solving the problem of display for AI art, would mean finding a practical way of exhibiting entire latent spaces, to make tangible the machine’s perspective on the world and thus raise the question of machine creativity. More elaborate works already do this by providing the means to explore latent spaces through a user interface, or by treating latent spaces more like video installations than mere resources for prints. A side effect of this approach would be precisely the impossibility of repeating the Christie’s scandal: if we can establish the curatorial consensus that single samples from a latent space are not equivalent to a work of AI art, passing off such samples as proper aesthetic artifacts will be, if not impossible, then at least much harder.
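One way such an exhibition interface could be sketched, under the same illustrative assumptions as above (the hypothetical stand-in generator `G` with a 100-dimensional latent space): interpolating between two latent points yields a continuous path of images, closer to a video installation than to a single framed print.

```python
# Sketch of a latent-space traversal an exhibition interface might expose.
# G and latent_dim are the hypothetical stand-ins from the sketch above;
# linear interpolation is one simple traversal (spherical interpolation is
# often preferred for Gaussian latent spaces).
import torch

def interpolate(G, z_start, z_end, steps=30):
    """Images along a straight line between two points in latent space."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    z_path = (1 - alphas) * z_start + alphas * z_end   # (steps, latent_dim)
    with torch.no_grad():
        return G(z_path).view(steps, 3, 64, 64)

# An interface might let visitors pick the endpoints; here they are random.
frames = interpolate(G, torch.randn(1, 100), torch.randn(1, 100))
```

Each frame is again just a sample, but presented as a traversal the sequence foregrounds the space itself rather than any one image.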

References

Bogost, Ian. 2012. Alien Phenomenology, or, What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.

Cole, Andrew. 2015. “Those Obscure Objects of Desire: The Uses and Abuses of Object-Oriented Ontology and Speculative Realism.” Artforum International 53 (10).

Galloway, Alexander. 2015. “Assessing the Legacy of That Thing That Happened After Poststructuralism.”

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems, 2672–80.

Harman, Graham. 2011. The Quadruple Object. Winchester: Zero Books.

Kerr, Dylan. 2016. “What Is Object-Oriented Ontology? A Quick-and-Dirty Guide to the Philosophical Movement Sweeping the Art World.” Artspace.

Meillassoux, Quentin. 2010. After Finitude: An Essay on the Necessity of Contingency. London: Bloomsbury.

Nagel, Thomas. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83 (4): 435–50.

Vincent, James. 2018. “How Three French Students Used Borrowed Code to Put the First AI Portrait in Christie’s.” The Verge.
