Aesthetics and Politics of Artificial Intelligence

MAT AIWG | Spring 2018, W/F 2-4pm | Elings Hall 2003

Description

This iteration of the MAT Artificial Intelligence Working Group starts from a basic hypothesis, put forward by Philip Agre in the late 1990s: "AI is philosophy underneath". Given the rapid development of the field since 2012, does this hypothesis hold?

When we talk about artificial intelligence today, we talk about highly specialized machine learning models. Unlike in the 1990s, the primary function of these models is not the mechanization of reason but the mechanization of perception, most prominently the mechanization of vision. As a consequence, the tasks that many machine learning models operate on are aesthetic tasks, ranging from the classification of images according to their content and form to the generation of entirely new images.
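
For concreteness, here is a minimal sketch of the first of these tasks, classifying an image with a pretrained convolutional network. It assumes PyTorch and torchvision and a hypothetical input file example.jpg; these are illustrative choices, not part of the working group's materials.

    # Minimal sketch: image classification with a pretrained convolutional network.
    # Assumes PyTorch/torchvision; "example.jpg" is a hypothetical input image.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.inception_v3(pretrained=True)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(299),
        transforms.CenterCrop(299),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    batch = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probabilities = torch.softmax(model(batch)[0], dim=0)
    print(torch.topk(probabilities, 5))  # top-5 ImageNet class indices and scores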

Figure: 100 randomly selected channels of the InceptionV1 layer Mixed_5b_Concatenated.

At the same time, the technical opacity of many machine learning models makes it inherently difficult to properly evaluate their results. This is complicated even further whenever a model is deployed as a product and opacity becomes a desirable property. In fact, the interpretability of machine learning models, that is, their capacity to generate or facilitate explanations of their results, has not only become an independent field of research within computer science but has also grown into an increasingly important legal challenge. Hence, the once speculative phenomenological question "How does the machine perceive the world?" suddenly becomes a real-world problem.
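
To make the question concrete, the following sketches feature visualization by activation maximization: an input image is optimized by gradient ascent until a chosen channel of an InceptionV1-style network responds strongly, which is one way of asking what that channel "sees". It assumes PyTorch/torchvision; the layer (inception5b, roughly the Mixed_5b layer from the figure above) and the channel index are illustrative assumptions, not the working group's actual notebook code.

    # Sketch of feature visualization by activation maximization.
    # Assumes PyTorch/torchvision; layer and channel choices are illustrative.
    import torch
    from torchvision import models

    model = models.googlenet(pretrained=True).eval()  # an InceptionV1-style network

    activations = {}
    def save_activation(module, inputs, output):
        activations["target"] = output
    model.inception5b.register_forward_hook(save_activation)  # ~ Mixed_5b (assumption)

    # Start from noise and ascend the gradient of one channel's mean activation.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)

    for step in range(256):
        optimizer.zero_grad()
        model(image)                                  # forward pass fills the hook
        loss = -activations["target"][0, 0].mean()    # maximize channel 0
        loss.backward()
        optimizer.step()

    # "image" now approximates the kind of input this channel responds to most strongly.

In practice, naive gradient ascent like this yields noisy, high-frequency patterns; the feature visualization literature covered in Week 4 adds regularization (for example, transformations and frequency priors) to obtain legible images.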

Contemporary machine learning models thus raise a set of issues that are completely independent of the ones raised by the possibility of a future general artificial intelligence. Most prominently, they are real-life socio-technical systems that have politics. Adapting Agre's hypothesis: AI is aesthetics and politics underneath.

Participants in the working group meet twice weekly to investigate this peculiar nexus of aesthetics and politics in contemporary machine learning, in equal parts through critical reading and technical review of papers and code examples.

Syllabus

Please note that this syllabus is subject to change and will be updated before and during the spring quarter. Readings fall into three types: articles/books/blog posts, talks, and source code close readings.

Week 1: Artificial Intelligence as a Philosophical Project

Lecture notebook

Wednesday, April 4, 2018

Friday, April 6, 2018

Optional

Week 2: The Limits of Machine Learning

Lecture notebook

Wednesday, April 11, 2018

Friday, April 13, 2018

Week 3: Deep Dreaming

Lecture notebook

Wednesday, April 18, 2018

Friday, April 20, 2018

Week 4: Interpretability I: Feature Visualization

Lecture notebook

Wednesday, April 25, 2018

Friday, April 27, 2018

Week 5: Interpretability II: Definitions of Interpretability

Lecture notebook

Wednesday, May 2, 2018

Friday, May 4, 2018

Week 6: GANs

Lecture notebook

Wednesday, May 9, 2018

  •    Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, "Generative adversarial nets", Advances in Neural Information Processing Systems (2014), 2672–2680
  •    Section 4 only: Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, "Improved Techniques for Training GANs" (2016), arXiv preprint arXiv:1606.03498

Week 7: Word Embeddings/RNNs

Lecture notebook

Wednesday, May 16, 2018

Friday, May 18, 2018

  •   2pm: Rodger Luo and Sam Green, "Visualization for Deep Reinforcement Learning" (Elings 2003)

Week 8: FAT (Fairness, Accountability, and Transparency) and Bias

Wednesday, May 23, 2018

Friday, May 25, 2018

Week 9: Interpretability III: FAT and Interpretability

Week 10: TBD

  • Possible topics: adversarial examples, RNNs, reinforcement learning, general AI

Further Resources