The Artist Trevor Paglen, the Surveillance State, and Your Life
The MacArthur Fellowship Winner Has Unveiled AI's Inner Workings, and His Next Stop Is Space
- Interview: Charlie Robin Jones
- Photography: Christoph Mack

Trevor Paglen is moving studios. For years, he’s kept a workspace in Berlin’s Mitte neighborhood, in a flat once occupied by Laura Poitras, the documentary filmmaker who was one of the three journalists Edward Snowden contacted in 2013 to facilitate the release of stolen NSA documents. It’s a fitting lineage: much of Paglen’s work probes the relationship between technology, power, and aesthetics.
He began his art career exposing the physical forms of the military-industrial complex, collecting patches worn by secret “Black Ops” squadrons, photographing the architecture of America’s domestic surveillance state, and producing Turner-esque prints of drones in flight against billowy cloud backdrops. Lately, though, Paglen has been seeing things never meant for human eyes—the colossal datasets of machine-made, machine-used images. In a recent exhibition at NYC’s Metro Pictures entitled “A Study of Invisible Images,” he presented a series of works that reveal, in various ways, how AIs see and interpret the world around them, and what implications that might have for us.
When I visit Paglen in early February, only the bare bones of his operation remain. In the middle of one table is Autonomy Cube, a sculpture made up of CPUs held in a vitrine that facilitates wireless internet access via the encrypted Tor network. A sheet of metallic fabric hangs by the window on a clothesline. “It’s aluminium mylar,” says Paglen. “Very similar to the material that the Orbital Reflector satellite project is made out of. It’s not, like, art, it’s just kind of a thing,” he explains with a laugh.
The Orbital Reflector picks up on another thread that runs through Paglen’s work—the urge to envision alternate futures. This summer, the 100-foot reflective diamond will be launched into orbit—on a rocket operated by Elon Musk’s SpaceX—where it will join the other satellites and assorted space junk circling the Earth. It’s a rarity in space: an exclusively aesthetic object, with no military or commercial purpose. “It’s a functional project, but it’s an imaginative project,” says Paglen. Until it’s burnt up by solar dust—Paglen figures it will have a six-to-eight week lifespan—it will shine as a new navigational point, visible as a pinprick of light in the night sky.
Despite his decidedly anti-establishment approach, Paglen has begun to receive considerable mainstream attention. In late 2017, he was awarded a MacArthur Fellowship (colloquially referred to as a “genius” grant), he was invited to the 2018 World Economic Forum in Davos, and in July he’ll open a massive mid-career survey at the Smithsonian Museum, just a 20-minute walk from Capitol Hill in Washington, D.C.
He speaks precisely but slowly, and he has a disarming habit of going from extreme seriousness to cracking a joke on a dime. He’s got a sense of comic timing you might not have guessed from his work, and a stinging sense of the absurd that you probably had.

Charlie Robin Jones
Trevor Paglen
Much of your work is about looking at super important but rarely seen things—both at infrastructures that exist physically but are rarely visited, and, with your “Invisible Images” project, pictures that are not made for humans.
What I mean by “invisible images” are images that are made by machines, for other machines, as part of the operations of anything from a self-driving car, to an autonomous drone, to an AI that’s looking at whether you like to drink Coke or Pepsi on Facebook. I’ve done projects looking at things like spy satellites and internet cables in the ocean, but the “Invisible Images” project is trying to see what’s going on inside that satellite, or what’s going on inside that cable that’s under the ocean. What’s going on inside that data server at work, what’s going on inside the autonomous drone. That whole process is fundamentally invisible to humans, although it is an operation on an image, right? So, the tools I’ve been building here are meant to translate this world of autonomous or computer vision into something that you can receive as a human.
A computer vision system makes a digital image and extracts some kind of information from that, whether that’s trying to figure out where the sides of a road are or trying to identify something as a face with facial recognition. It’s doing that digitally—taking your image, converting that into a series of numbers that represent, for example, certain color values, and trying to make a kind of fingerprint of your face. You can use that to estimate somebody’s identity. You can do basic facial recognition, because the proportions are going to be unique to every person. You can try to estimate somebody’s emotion, try to figure out if they’re smiling or not, or if they’re shocked or surprised. There’s one that’s trying to estimate gender—it looks at someone’s face and says, “In this frame she’s 85.11% female, and in this frame she’s 66.44% female.” You start to see some of the underlying biases that are built into these algorithms. Somebody has decided that there’s such a thing as 100% female, and that that looks a particular way. What is that 100%? Is that Barbie? Is that Grace Jones? Who decides that?
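What Paglen is describing is, mechanically, a standard computer-vision pipeline. As a rough illustration only (not the software behind his work), the Python sketch below shows how a face becomes a numeric “fingerprint” that can be compared across frames, and how a classifier’s output turns into the kind of percentage he quotes; every value, threshold, and function name here is hypothetical.

```python
# Illustrative sketch only: a face reduced to a vector of numbers, identity
# estimated by comparing those vectors, and attribute "percentages" produced
# by a softmax over classifier scores. Not Paglen's software; all values are
# made up for demonstration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How alike two face 'fingerprints' (embedding vectors) are."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw classifier scores into the percentages quoted above."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Pretend embeddings produced by some face-encoding model (hypothetical values).
face_in_frame_1 = np.random.default_rng(0).normal(size=128)
face_in_frame_2 = face_in_frame_1 + np.random.default_rng(1).normal(scale=0.1, size=128)

# Identity estimate: same person if the fingerprints are close enough.
print("same person?", cosine_similarity(face_in_frame_1, face_in_frame_2) > 0.9)

# Attribute estimate: a two-class "gender" head producing figures like 85.11%.
# Whatever the training data called "100% female" is baked into these scores.
logits = np.array([1.74, 0.02])  # hypothetical scores: [female, male]
probs = softmax(logits)
print(f"'female' in this frame: {probs[0]:.2%}")
```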
I just wrote the word “eugenics,” to ask the next question…
It’s more phrenology, but, yes.
It reminds me of the Stanford study that suggested a facial recognition AI could tell if someone was straight or gay. The notion that one could predict something as complex as human sexuality with an algorithm strikes me as something that goes back to the 19th century.
There’s a lot of that kind of thing going on in computer science and AI right now, and it’s really a big problem. Because, basically, these kinds of philosophies, and biases, and racisms, quite frankly, are being built into infrastructure. It’s a very urgent issue, actually.


What is the role of the images in this?
When you’re measuring and classifying what people look like, or if it’s an AI algorithm looking at a public plaza and trying to do facial recognition with people to figure out what products they’re interested in buying, these are all visual phenomena. We don’t have a good tradition in visual culture of thinking about this weaponization or operationalization of images. If you use an art historical approach to that kind of question, you are only going to get a very limited understanding of what’s going on. Analyses of power, of capital, and of race are the kinds of tools you bring to the conversation that are ultimately more helpful.
Speaking of which, in your work Even the Dead Are Not Safe, you chose to use the faces of Simone Weil and Frantz Fanon. Why did you choose these thinkers in particular?
These are pieces where I was using trained facial recognition software on the faces of dead revolutionaries and philosophers. It’s asking whether the development of these technologies will preclude people like Simone Weil or Frantz Fanon from ever existing again. We are rapidly moving towards a society in which our de facto liberties are continually modulated by our metadata signatures. In that society, it’s becoming increasingly easy to use these kinds of tools to make instruments of power more efficient, whether that is capital in the marketplace or whether that is in policing, law enforcement, and surveillance, you know? You could, tomorrow, build a city that automatically does facial recognition on everybody and gives everybody who is jaywalking a ticket. You. Could. Do. That. Tomorrow. What I’m getting at is that someone like a Fanon or a Weil contributed to social progress precisely by breaking the law—because those laws were unjust. How does that work in a society in which tools like this work more and more in the favor of centralized political or law enforcement systems?
For me, underlying it all are the fundamental questions that concern me a lot: Who gets to define the meaning of things? Who gets to decide what female or male is? Who gets to decide how interpretations work? Political struggles over meaning are also struggles for rights, for self-representation. Self-representation is representation. It’s a form of image-making, in a way.

“Beckett” (Even the Dead Are Not Safe) Eigenface, Trevor Paglen, 2017

Vampire (Corpus - Monsters of Capitalism) Adversarially Evolved Hallucination, Trevor Paglen, 2017
What did you think about the Google Arts & Culture app, which matched faces to historical portraits?
I was terrified to see so many people giving their biometric information to Google. In exchange for an Instagram post! These are the kinds of stunts that are very easy to do with this technology, but I think about the underlying things that are happening—millions and millions of people enrolling themselves in facial recognition databases in exchange for a prize, which is being told what painting you look like. [Laughs]
Do you think about the urge to resist through technology?
There are tools you can create that are technologies of obfuscation and hiding, whether that’s encrypted text messaging or using weird makeup patterns on your face that are designed to defeat facial recognition software. The question is, do these create spaces where liberties get renewed in different ways? To what extent are there technological fixes to the ethical challenges that are posed by technologies themselves? Can you build unbiased artificial intelligences? This question of hiding—I am for doing that, I’ve built some of these things, but I don’t think these are strategies. These are little tactics that you can use. For me, these are things that show us what we want. I think it’s really important for us to be asking ourselves what kind of society we want to be living in. Do we want to have places in society that are not subject to surveillance, whether that’s Facebook surveillance or police surveillance? Or do we not want to have those places? Technologies that allow for the creation of these spaces, they’re things that help you to imagine or to articulate what kind of world you want to live in, but I do not think that technology can solve much on its own. I think these are ultimately questions that have to be solved at the scale of society itself through civic institutions, not through technology platforms or corporate platforms.
“You could, tomorrow, build a city that automatically does facial recognition on everybody and gives everybody who is jaywalking a ticket.”

Behind us is a piece of test fabric for the Orbital Reflector. Can you tell me about this project?
One of the themes that’s come up in our conversation is this question of not only how do you look at the world critically and learn how to see the environments and landscapes and histories that you’re embedded within, but also, how do we create glimpses of what we want? What kinds of futures can we imagine that are not dystopian surveillance societies where the entire climate is falling apart and artificial intelligence is used to try to manage political and economic crises? Which is where we’re going! So, some of my projects try to do exactly that. The Orbital Reflector project is like that in the sense of imagining a relationship to space that is not embedded within a tradition of nuclear war and colonialism, which is what I would argue space has historically been all about.
How was it working out how to launch a satellite?
It’s a lot of conference calls. [Laughs] A lot of spreadsheets. Building a spacecraft that is going to be reliable and going to do what you want it to do is obviously challenging. The way that you do that is you make it as simple as possible and you test it, and you test it, and you test it some more. But the launch is pretty straightforward: you pay a bunch of money.
What are your thoughts on working with the SpaceX program?
It’s, like, “Oh, SpaceX, Elon Musk is putting up your art.” And I’m like, “No, he’s not…” [Laughs] You write a check to the company and they do what they do.
It strikes me that Orbital Reflector is both a way of illuminating the fact that space is full of junk that will outlast the pyramids and an opening up of space, to allow for something that might be… unintelligible.
It’s trying to make an object the exact opposite of every other thing it’s been. [Laughs] So much of the project has been about trying to not brand it, basically. Being very careful about where we take money from, being very careful about the kind of language we use. Really, honestly trying to do a project that is not an advertisement, that is not a military technology demonstration, that literally is as close to a kind of aesthetic object as possible. And that is actually really hard to do.

You’ve spent your career documenting the relationship between power and images. Have you found any change in the way you approach your work since the Trump election?
I guess, for me, this is a techno-surrealistic, gothic kind of moment. And I think that’s affected the way that some of the things I make look. I have a project where we’re training artificial intelligences. With an AI training set, you create an AI to recognize stuff that’s in your kitchen. Let’s say you make a list of all the stuff that’s in your kitchen—lemons, limes, forks and knives, plates—and then you give the AI thousands of images of each of those objects, and it learns how to identify them. We started training neural networks to see irrational things. We made one called “The Interpretation of Dreams” that can only see objects from Freudian psychoanalysis. So, if you put it in your kitchen, instead of seeing knives, and plates, and forks, it will see puffy faces, and injections, and scabby throats. We also made one called “Monsters of Capitalism” that can only see monsters that have historically been allegories for capitalism. So, you can see, like, a Haitian zombie from the sugar plantations whose soul had been stolen, or the vampire in aristocratic clothes that sucks your blood. I don’t think I would have used that kind of imagery a couple years ago, in the sense of stuff that’s so blatant.
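The recipe Paglen outlines is, in mechanical terms, ordinary supervised image classification: a taxonomy of labels, a folder of example images for each label, and a network fit to that pairing. The sketch below, using PyTorch and torchvision, is a generic illustration under assumed paths and settings, not Paglen’s actual pipeline; the “taxonomy/” folder and the hyperparameters are placeholders.

```python
# A minimal sketch of a custom training set: one folder per category in your
# taxonomy (whether "lemons" or "vampires"), and a network fine-tuned to tell
# them apart. Paths and settings are placeholders, not Paglen's pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# e.g. taxonomy/vampire/*.jpg, taxonomy/zombie/*.jpg, ...
dataset = datasets.ImageFolder("taxonomy/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer so it has
# one output per category in the taxonomy.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Once trained, the network can only "see" the categories it was given:
# point it at your kitchen and it will find vampires, not forks.
```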
You just got back from Davos. How was it?
It was ridiculous. [Laughs] It really is just a few thousand people from around the world who all know each other. That’s very clear. But I met a lot of good people there, and was on panels with, you know, the chief of the UN Human Rights commission, and there are some people who are genuinely trying to work out how to make a more equitable world, and it’s nice to contribute to that conversation. I did feel like I was the only one there who openly said that I didn’t believe there were market solutions to social problems. The line I heard over and over again at Davos was, “Trevor, we’re so happy that you’re here. We like to hear alternative opinions.”
- Interview: Charlie Robin Jones
- Photography: Christoph Mack
- Images/Photos Courtesy Of: Metro Pictures, the Nevada Museum of Art, and Trevor Paglen