A visual artist and software engineer, Helena Sarin has always worked with cutting-edge technologies, first at Bell Labs, designing commercial communication systems, and for the last few years as an independent consultant, developing computer vision software using deep learning. While she has always worked in tech, Helena has also done commission work in watercolor and pastel, as well as in applied arts like fashion, food and drink styling, and photography.
But art and software ran as parallel tracks in her life, all her art being analog… until she discovered GANs (Generative Adversarial Networks). Since then, generative models have become her primary medium.
She is a frequent speaker at ML/AI conferences, and for the past year has delivered invited talks at MIT, the Library of Congress, and Capital One.
Her artwork has been exhibited at AI art exhibitions in Zurich, Dubai, Oxford, Shanghai, and Miami, and was featured in a number of publications, including the January 2020 issue of “Art In America” magazine.
With pretty much all of her 2020 exhibitions and talks cancelled or postponed until 2021, Helena is using this time of lockdown as an opportunity to work on a few artists’ books, each featuring her AI artwork.
In 2018, art curator and longtime SuperRare art collector Jason Bailey published Sarin’s seminal essay “NeuralBricolage,” along with his analysis of Sarin’s work, in his post “Helena Sarin: Why Bigger Isn’t Always Better With GANs And AI Art” on Artnome. The post has now been read tens of thousands of times, and Bailey has agreed to share an excerpt from the foreword to that article with the SuperRare community below.
AI art made with GANs (generative adversarial networks) is so new that the art world does not yet understand it well enough to evaluate it. We saw this unfold in 2018, when the French artists’ collective Obvious stumbled into selling their very first AI artwork for $450K at Christie’s.
Many in the AI art community took issue with Christie’s selecting Obvious because they felt there were many other artists who had been working far longer in the medium and who were more technically and artistically accomplished — artists who had given back to the community and helped to expand the genre. Artists like Helena Sarin.
Sarin was born in Moscow and studied computer science at Moscow Civil Engineering University. She lived in Israel for several years and then settled in the US. While she has always worked in tech, she has moonlighted in the applied arts like fashion and food styling. She has played with marrying her interests in programming and art in the past, even taking a Processing class with Casey Reas, but Processing felt a little too much like her day job as a developer. Then, two years ago, she landed a gig with a transportation company doing deep learning for object recognition, using CycleGAN to generate synthetic data sets for her client. A light went on, and she decided to train CycleGAN on her own photography and artwork.
This is actually a pretty important distinction in AI art made with GANs. With AI art, we often see artists using similar code (CycleGAN, SNGAN, Pix2Pix, etc.) and training with similar data sets scraped from the web. This leads to homogeneity and threatens to make AI art a short-lived genre that quickly becomes repetitive and kitsch. But it doesn’t have to be this way. According to Sarin, there are essentially two ways for an AI artist exploring GANs to protect against this.
First, you can race to use the latest technology before others have access to it. This is happening right now with BigGANs. BigGANs produce higher-resolution work, but they are too expensive for artists to train on their own images. As a result, much of the BigGAN imagery looks the same regardless of who is creating it. Artists chasing the latest technology must race to make their mark before the BigGAN aesthetic is “used up” and a “BiggerGAN” comes along.
Chasing new technology as the way to differentiate your art rewards speed, money, and computing power over creativity. While I find new technology exciting for art, I feel that the use of tech in and of itself never makes an artwork “good” or “bad.” Both Sarin and I share the opinion that the tech cannot be the only interesting aspect of an artwork for it to be successful and have staying power.
The second way artists can protect against homogeneity in AI art is to ignore the computational arms race and focus instead on training models with their own hand-crafted data sets. By training GANs on your own artwork, you can be assured that nobody else will come up with the exact same outputs. This latter approach is the one taken by Sarin.
Sarin approaches GANs more as an experienced artist would approach any new medium: through lots and lots of experimentation and careful observation. Much of Sarin’s work is modeled on food, flowers, vases, bottles, and other “bricolage,” as she calls it. Working from still lifes is a time-honored approach for artists exploring the potential of new tools and ideas.
Sarin’s still lifes remind me of the early Cubist collage works by Pablo Picasso and Georges Braque. The connection makes sense to me given that GANs function a bit like an early Cubist, fracturing images and recombining elements through “algorithms” to form a completely new perspective. As with Analytic Cubism, Sarin’s work features a limited color palette and a flat, shallow picture plane. We can even see the use of lettering in Sarin’s work that looks and feels like the lettering from the newsprint used in the early Cubist collages.
I was not surprised to learn that Sarin is a student of art history. In addition to Cubism, I see Sarin’s work as pulling from… Read “Helena Sarin: Why Bigger Isn’t Always Better With GANs And AI Art” for the rest of Bailey’s foreword and Helena Sarin’s important paper “Neural Bricolage” in its entirety.