
CASEY REAS in the group show Thin as Thorns; In These Thoughts in Us: An Exhibition of Creative AI and Generative Art

Date Posted
11.28.20

September 8 – January 2021

Common framings of emerging AI technologies are often drawn from the language of technocratic infrastructure, and from entertainment narratives that import ideas of domination and either/or logic. Either the machines win or the humans win. Either all is lost, or utopia is gained. This thinking comes from assumptions about a coming singularity, wherein AI transcends human capability, reflecting our own uses of power back to us.

For those working within the realm of AI, the notion that we will create a computer with the capacity for genuine human-like consciousness within the next few years remains fairly unrealistic. There is, however, a general consensus that we may discover emergent properties within AI systems that resemble consciousness or self-awareness, and that perhaps, over time, suggest varying degrees of sentience. Could a machine act creatively to produce something wholly original, something its own programmers hadn’t thought of or programmed it to do? Many consider that the true test of an AI system (the so-called Lovelace Test), one far more rigorous than Alan Turing’s eponymous challenge. Yet, as the authors of the Lovelace Test admit, no machine has passed that milestone as of this writing.

Nevertheless, it is urgent that we begin developing a co-creative relationship with automated intelligence now. The vision and imagination of a diverse range of artists is essential to this cultural project. All of the artists in this exhibition have constructed such a relationship with AI systems, and their practices bring forth new languages that will aid us in navigating our increasingly cybernetic world.

Thin as Thorns; In These Thoughts In Us brings together over a dozen contemporary artists who are now using AI to explore artistic production. These artists include Memo Akten, Sougwen Chung, Chris Coy, Claire Evans, Holly Grimm, Joanne Hastie, Agnieszka Kurant, Annie Lapin, Allison Parrish, Casey Reas, Patrick Tresset, Siebren Versteeg, Cristóbal Valenzuela, and Tom White. The exhibition also features artwork by two pioneers of algorithmic and AI art, Roman Verostko and Harold Cohen.

The exhibition’s title comes from Articulations, a book of poems created through an AI system designed by Allison Parrish. Most of the artists in the exhibition, like Parrish, are assimilating machine intelligence into the framework of known artistic forms (e.g., drawing, painting, sculpture, poetry). Yet at the same time, these artists are also creating new paradigms of artmaking that upend traditional models completely, often by allowing machine systems to deliberately co-author some of their aesthetic choices.

The Chinese-born Sougwen Chung, for instance, has enacted a number of live performances with AI systems, performing a “duet” with a robotic drawing machine trained to create drawings and paintings in her own style. For her, the process is highly collaborative and allows her to explore what she describes as the “multi-agent body,” which is often central to her practice. As she recently remarked, “This feedback loop of the human/tool/system fundamentally changes the process of making. It suggests things to you and nudges you along. It complicates authorship, and it extends beyond creative pursuits to our day to day use of technology. Depending on your perspective, that's either exciting or uncomfortable.”

Like Chung, the Vancouver-based Joanne Hastie and the New Mexico-based Holly Grimm have also been using machine learning to teach their machines how to look at, understand, and mimic their own artistic styles, mostly by feeding their systems a vast array of their own artworks, from paintings and drawings to personal photos. This represents a significant step for the field, given that previous experiments by early AI proponents generally centered on teaching a machine to create artworks that resemble known historical paintings (e.g., by van Gogh or Rembrandt). By turning the machine back on themselves, these artists can push their own aesthetics in ways they wouldn’t have thought of before. Hastie, for instance, often encourages the machine to paint over hand-painted backgrounds that she provides to the robot, while in other cases she allows it to suggest compositions that she then uses as inspiration for hand-painted canvases of her own. In each case she is working in concert with a machine system that constantly echoes her unique aesthetic choices. “I’m interested in the process of automating the ideation and composition of the painting,” she says, “and how that frees me up to explore much further.”

Like Hastie, the painters Annie Lapin and Chris Coy are also interested in the ways in which AI can work as a sounding-board/collaborator/interpreter. Coy uses an open-source neural net that has been trained to transfer the style from one source image to another. That allows him to map the decadence of the Baroque and Rococo onto later scenes of carnage as envisioned by Francisco Goya in The Disasters of War. By doing so, Coy and his team are able to explore how disparate materials can produce an echo of curious and troubling connections. The resulting images are then painstakingly re-created as oil paintings on linen. "I'm troubled by the ease with which we can set up chance operations and cannibalize the masters without the benefit of an ethics borne from accumulated effort. The fascinating logical leaps the machine makes manifest as painterly marks and create a perverse sort of new fantasy—some preemptive death rattle. A cautionary tale to be sure..."
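Neural style transfer of the kind Coy employs typically works by matching feature statistics between a style image and the image being transformed. Below is a minimal NumPy sketch of the Gram-matrix style loss at the heart of such systems; the feature maps here are random stand-ins for what a convolutional network’s layers would actually produce, so this is an illustration of the idea rather than Coy’s specific pipeline:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; captures 'style' independent of spatial layout.

    features: array of shape (channels, height, width).
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(features_a) - gram_matrix(features_b)
    return float(np.mean(diff ** 2))

# Random stand-ins for feature maps of two different images.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))

print(style_loss(content, style))  # nonzero: the two 'styles' differ
print(style_loss(style, style))    # exactly zero for identical features
```

In a full system, this loss is minimized by gradient descent on the pixels of the target image, which is what drives the Baroque palette onto Goya’s compositions.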

Meanwhile, the LA-based painter Annie Lapin has found her own inspiration in AI’s ability to mine data, yet her approach differs from others in the exhibition. “As a painter, my interest in Machine Intelligence relates to the way I think about visual memory in the process of making and perceiving a painting,” she says. In her practice, Lapin reimagines the art-historical and other visual-cultural material in her personal memory as a decontextualized archive of forms, like the training data sets that teach neural networks to identify imagery without experience of it in the real world. Importing works in progress into Photoshop, she engages in a process of free association and image generation that allows forms and meaning to morph and bend in a way she likens to “the algorithmic pareidolia generated by a convolutional neural network.” In Catnose, through this ritualized imitation of AI, she arrives at painterly choices that her conscious mind might not have foreseen.

Other artists in the exhibition deliberately introduce more compromised data into the process to purposely effect strange aberrations and/or mistakes (Parrish, Casey Reas, Memo Akten). The LA-based Casey Reas, for example, uses neural networks to generate a vast number of images by carefully mutating original source images culled from specific cinematic works. The images then become a new film, one that is not “edited” in a traditional, sequential way but rather spatially, as Reas guides the system along different axes within, and through, visual elements. The end result is a record of both the system’s attempt to reconcile the mutations Reas is interested in and the lyrical movement between data points.

Conversely, the London-based Memo Akten uses algorithms to harvest still images from the photo-sharing site Flickr that have been tagged with words such as “love,” “faith,” and “God.” The resulting work, Deep Meditations, uses machine learning to look at tens of thousands of such images, in an attempt to see whether a machine might be able to understand some of our most abstract and subjective concepts. “As [such ideas] have no clearly defined, objective visual representations,” he writes, “an artificial neural network is instead trained on our subjective experiences of them, specifically, on what the keepers of our collective consciousness thinks they look like.” The result throws the viewer directly into the “thinking process” of the machine, where it appears to be grappling with such ideas.

Historically, literature was one of the earliest fields in which computational media were exploited creatively, dating back to the 1950s (with antecedents in the many analog, chance-based devices going back to the 16th century). For the better part of the past five years, the New York-based Allison Parrish has been using AI to write poetry. Her process is similar to that of some of the artists mentioned above: she begins with specific language models and data sets that she wants to explore (e.g., pronunciation dictionaries, historic poetry, the Bible, classic fiction) and then manipulates the machine’s output through various systems of rules that she programs into the system. Most recently she has been interested in statistical models of nonsense poetry and in interpolation (generating intermediate data points between two existing ones) to create highly expressive text and/or vectors.
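Interpolation in this sense can be sketched in a few lines: given two vectors (word embeddings, say), one generates the intermediate points that connect them. The three-dimensional “embeddings” below are toy stand-ins for illustration, not output from any of Parrish’s actual models:

```python
import numpy as np

def interpolate(a, b, steps):
    """Linearly blend vector a into vector b, returning `steps` points (endpoints included)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return [(1 - t) * a + t * b for t in np.linspace(0.0, 1.0, steps)]

# Hypothetical 3-dimensional stand-ins for two word embeddings.
thorn = np.array([0.9, 0.1, 0.0])
thought = np.array([0.1, 0.8, 0.5])

path = interpolate(thorn, thought, 5)
print(len(path))   # five points along the path
print(path[2])     # the midpoint, an invented 'in-between word'
```

In practice, each intermediate vector is mapped back to the nearest word or sound in the model’s vocabulary, which is where the expressive nonsense emerges.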

Music is another area where the automation and programming of chance combinations have a storied history within the arts, in both analog experiments (dating back to the 1600s) and digital ones (dating back to the 1960s). Last year, the LA-based conceptual pop group YACHT gained considerable attention by releasing one of the first commercial albums made with AI, Chain Tripping. The process involved feeding the band’s musical history and lyrical influences into a series of machine learning models, which in turn generated countless textual and melodic patterns. YACHT subsequently arranged these patterns into new songs, which they performed and sang live, an effort that Claire Evans claims profoundly challenged the band’s embodied patterns and assumptions.

As in music, one could argue that some of today’s visual experiments with AI have antecedents in the automatons of the 1600s, when an interest in automation led to devices, toys, and musical instruments that could virtually play themselves. That interest took on new urgency in the postwar years, with the dawn of computational models, when artists began incorporating more technological means into their practices. The Japanese artist Akira Kanayama, part of the Gutai group of the 1950s, was one of the earliest proponents of “robotic drawing” (Sougwen Chung often cites him as an inspiration); his primary tool was a small robotic drawing machine on wheels (aka a “turtle”). And now, some 60 years later, the Belgium-based artist Patrick Tresset uses his own combination of robotic devices and computer vision to create automatic drawing machines. These machines draw figures in real time and often convey “moods” such as shyness or nervousness; and while they are autonomous, they do not “learn” in the machine-learning sense, reacting instead to what they see. Still, Tresset uses his practice to explore the ways in which robotic agents can act as performers with human-like behaviors. “Apart from the theatrical aspect of my work,” he writes, “the use of robots enables me to draw with varying levels of absence, distance, spontaneity and control.”

Human-like mark-making is also central to the work of Roman Verostko, an artist who was part of the first wave of computer artists using algorithms to drive pen plotters (digitally controlled drawing machines) in the 1960s. He calls his practice “Epigenetic Painting” because his custom software allows a painting or drawing to grow within the system, to the point where “each unfolded offspring is a variant of its predecessor,” as he writes. His artworks thus often approximate the gestural qualities of human hand movements. As he writes, “the code underlying the robotic brushworks grew from my experience with Abstract Expressionism in New York in the early 1960s, and I have always preferred to use Chinese brushes for applying the paint, which were originally given to me when I taught in China in 1985.”

Now, a generation later, the New York-based Siebren Versteeg explores his own algorithmic approach, often achieving an extraordinary level of complexity by creating a rich conceptual space for his algorithms to explore. He often creates systems that are able to produce an infinite number of painterly images and/or collages that announce their own somatic, gestural qualities. His approach is concerned with transmuting the static, indexical object into the realm of the ever-present, where “the artifact remains in a continuous struggle towards liberation from the corporeal,” as he writes.

Yet it is the legacy of Harold Cohen (1928–2016) that permeates the work within Thin as Thorns more than any other. One of England’s most promising painters of the late 1950s, Cohen shifted his focus in the 1960s to creating a computer-based system that would automate the painting process entirely. The result was AARON, an automated system that could produce unique drawings and/or paintings via different robotic arms and devices. AARON, considered one of the earliest uses of creative AI, was also instrumental in pushing Cohen to develop his own system for systematically comprehending and deconstructing the nature of representation. That allowed him to imbue AARON with an understanding of such painterly notions as spatial distribution, figure-ground relationships, and the integrity of individual figures. What’s more, the machine’s creative and aesthetic output was never a copy of a pre-existing artwork but was instead culled from its own cognition. “It was intended to identify the functional primitives and differentiations used in the building of mental images,” Cohen wrote.

Other artists in the exhibition take that further by exploring how algorithms shape our lives far more than they did in Cohen’s time. Based in New Zealand, Tom White has been attempting to teach neural networks how to draw. He does so by training them on data sets of real-world images (e.g., animals, plants, tools). As they learn, he combines them with computer-vision algorithms and custom drawing systems to produce minimalist abstractions that may or may not resemble known objects. Yet when these drawings are photographed and uploaded back into AI image-recognition systems (such as those used by Google, Amazon, and Facebook), they are “recognized” and attributed human-like legibility and definitions.

Agnieszka Kurant explores data mining and crowdsourcing in her piece Assembly Line. For this work, the artist worked with MIT’s Artificial Intelligence Lab to mine thousands of self-portraits taken by online workers all over the world. The collection was initially amalgamated into a single image, or what Kurant describes as a “self-portrait” of the largest growing working class in the world, before being transformed into a 3D-printed, nickel-and-copper sculpture. Once the sculpture is sold on the art market, the workers will share in the profits via a bonus system. In the process, Kurant flips the conventional model of AI usage in corporate America, which is designed to extract information and value from the worker/consumer.

Finally, Cristóbal Valenzuela offers an artwork in which viewers can directly experience the effects of using AI, or more specifically, generative adversarial networks. His Generative Engine (Text2Image, 2017) allows any visitor to write a textual description of any scene imaginable while a machine learning model translates those words into unique machine-generated images. The piece reimagines and speculates on new creative formats, inviting users to consider the role an algorithm can play in a creative process; the grammars and primitives of digital creation are analyzed with the expectation of finding new machine-inspired formats. The piece was built with Runway, a next-generation creative machine learning platform, using the AttnGAN model created by Tao Xu et al.

Indeed, AI undoubtedly operates within a complex, conflicted space within culture, one that raises a number of socio-political and ethical issues. What’s more, by introducing such tools into the realm of art, these artists are also engaging a number of art-historical continuums, ranging from the rule-based systems born out of Constructivism and, later, Minimalism, to the chance, recombinatorial practices of the early Modernists. Their work is also part of a larger trend toward systems or cybernetic aesthetics, where the emphasis falls as much on the receiver as the author, more on the process of navigation than on determining end points, and on endless variation instead of individual outputs. Yet by its nature, AI cannot fit neatly or easily into art-historical models. After all, it would be difficult to define it as either a medium or a tool; ultimately it is both, in the same way that the AI artist/designer is equal parts author and spectator. Moreover, it encompasses far too much to be reduced to such basic, a priori deductions. AI is the great disruptor of our time and complicates virtually everything it touches. Yet for the artists in this exhibition, that might also be its greatest asset. For them it is a behemoth of sheer potentiality that can, and often does, engender a great degree of self-reflection and self-discovery. It forces them to reflect on their own work in ways no other tool or system can, resulting in a highly human experience, even when the stakes are extraordinarily high.

HONOR FRASER
2622 S La Cienega Blvd.
Los Angeles, California 90034
Tel 310.837.0191 | Fax 310.838.0191
info@honorfraser.com
www.honorfraser.com
Tuesday – Saturday 10am – 5:30pm
By Appointment Only

http://honorfraser.com/?s=current