Julia Albrecht

Have You Seen This Person?, 2023

AI-generated images (DALL-E 2, MyHeritage, inpainting/outpainting, Adobe Photoshop)

Julia Albrecht is a German lens-based artist interested in culture, psychology, and relationships. With the recent rise of AI-generated portraits, Albrecht began to question what it means to be human in an increasingly digital age. Have You Seen This Person? is a series of portraits created by a machine algorithm that attempts to capture the unique characteristics of individuals. The resulting faces do not exist in the “real” world, allowing Albrecht to think through the “realness” of identity as mediated by digital technologies. Displayed together, the faces simultaneously evoke a 21st-century salon-style portrait hang, a wanted or missing-persons display, and a tiled Instagram profile. As the artist describes: “As we look at these portraits, we are confronted with a profound question: have we seen these people before, or are they mere products of the machine’s imagination? The boundaries between reality and artificiality blur as AI-generated portraits challenge our perceptions of identity and humanity. They force us to reflect on the evolving relationship between technology and our human experience.”

Artist Bio

Julia Albrecht is a German lens-based artist whose work delves into psychological and cultural themes. She uses photography, video, and writing to uncover emotional and psychological effects, particularly in relation to gender and binary relationships. Through her work, she creates spaces for viewers to connect with and understand the complex experiences surrounding these issues.

In her latest project, Have You Seen This Person?, Albrecht explores the world of AI-generated portraits. As we enter the age of artificial intelligence, these portraits capture each individual's unique features and characteristics with stunning accuracy and detail. However, as we gaze upon these digital creations, we are left wondering if we have seen these people before. Are they real individuals or mere figments of the machine's imagination?

Albrecht's work challenges our perceptions of identity and humanity as the lines between reality and artifice become blurred. These AI-generated portraits invite us to consider the ever-evolving relationship between technology and the human experience. They give us a glimpse into a future where machines have the ability to create, and we are left to ponder the possibilities of what this means for the future of art and creativity.

In Have You Seen This Person?, Albrecht's artistic approach provides a fascinating exploration of what it means to be human in a digital world. Her work invites us to consider how technology is shaping the way we understand and perceive ourselves, and what this means for the future of art and creativity.

Sveta Bilyk

Plastic Nature, November 12, 2022

AI-generated images (DALL-E 2)

Sveta Bilyk is a Ukrainian 3D and VR artist currently based in Paris. She is interested in balancing the past, present, and future with an understanding of how these concepts are intertwined. Plastic Nature is a project that draws attention to the environmental pollution that continues to affect the health of the planet. By using DALL-E 2 to generate these images, Bilyk recognizes the climate impact of both the depicted subjects and the development of emerging technologies. The works are paired with the word-based prompts that generated each image, bringing the artist’s thought process and creative intention to the fore. According to the artist: “Our future depends on our relationship with nature. It is possible only when the relationship between the ecology and modern technology is harmonious.”

Artist Bio

Sveta Bilyk was born in Ukraine. She is a 3D and VR artist currently based in Paris.

She explores how to keep the balance between past, present, and future. The past, for her, means knowing your roots and respecting, preserving, and multiplying the national heritage you belong to. The present is about the mental health of each of us and taking care of the inner world. She believes that we spread what we are full of. The future is the biggest challenge. But first of all, we need to take care of the nature and oceans around us. What future awaits us if we continue to consume irresponsibly and pollute the environment?

Maria Björkdahl

AI Generated Abstract Watercolor Series, 2022

AI-generated images (DALL-E 2)

Swedish-Moroccan artist Maria Björkdahl was trained as a traditional painter with an emphasis on materiality and process. Her transition to AI-generated art bridges the physical and the digital; Björkdahl’s works exist both as born-digital objects and as physical printed editions. AI Generated Abstract Watercolor Series explores the materiality of watercolor through artificial means. The works are composed in a collaboration between the artist and DALL-E 2, yet still reflect the texture of hand-mixed pigment-and-water paint. The resulting abstract images are imperfect, challenging the hyperreal and fantastical imagery popular in AI art. By printing archival editions, Björkdahl also investigates the notion of a unique, individual art object. According to the artist: “I’m interested in blurring the distinction between traditionally viewed authorship of an original piece of art and AI generated, possibly authorless, mass produced images.”

Artist Bio

Maria Björkdahl is a Swedish-Moroccan visual artist who lives and works in Los Angeles. Her art focuses on unearthing multiple layers and buried memories. Using materiality and process, she mines ideas of meaning embedded in the material, whether by literally taking apart and unraveling the warp and weft of the traditional cotton duck painting support, stitching old pocket calendar pages onto canvas, or, more recently, exploring ideas of authorship surrounding AI-generated art. Maria’s work has been shown throughout Southern California, including at Artcore, Launch LA, the Museum of Latin American Art, and Gallery 825. She is a member of the Los Angeles Art Association, a grant recipient from the Center for Cultural Innovation, and has attended international art residencies in Germany at the Berlin Art Institute and in Sweden at the “Tomma Rum” (Empty Rooms) residency.

Maria is a graduate of San Francisco State University (MA, International Relations) and Uppsala University, Sweden (BA, Anthropology). She has studied studio art (drawing and painting), art theory, and art history at Santa Monica College, El Camino College, and Cal State Dominguez Hills.

Maria Björkdahl on her journey with AI in the legacy of her creative practice:

“Hello, my name is Maria Bjorkdahl. I'm a Swedish-Moroccan visual artist who lives and works in Los Angeles.

My art is very material oriented: I use traditional art materials such as canvas, paint, paper, ink, and watercolor, and I like to manipulate and use them in perhaps unconventional ways, by taking the canvas apart and re-attaching it again, or making three-dimensional forms with the paper. I first got interested in AI art last summer. There were a lot of articles being published at the time, so I thought I would experiment with it. I was particularly interested in erasing the distinction between what is artificially made and what is human made.

So I strove to make, together with the AI program DALL-E 2 in this case, abstract art that looked like it had the trace of the human hand. I would say that my art practice has been influenced by this insofar as the idea has taken a much more central role in the creation of artwork. Before, I felt that the artwork was the embodiment of an idea, growing out of the working of the material; now it's more of a mental idea that guides the process.”

Sean Capone

The Weird Sisters #5, #10, and #13, 2022; Death of the Author, What Good Is Grief to a God, and Zombie Formalisms, 2023

AI-generated digital images and animations (Stable Diffusion, Wombo, Genmo, D-iD Studio, Adobe Firefly, and Adobe After Effects)

Sean Capone is an animation artist, video projection muralist, performer, and writer based in Brooklyn, New York. His work with artificial intelligence often intersects with spoken word and character animation to create pantheons of subjects that navigate the technological future. The Weird Sisters are part of Capone’s Zombie Punx project, which synthesizes mythology, pop culture, punk, horror, and art history. As generated nature spirits, The Weird Sisters are the Three Fates of the digital age, effervescent reflections of folklore that spin humanity through the rapid development of technology. By contrast, the characters in Death of the Author, What Good Is Grief to a God, and Zombie Formalisms are prosopopoeia, or visual personifications of abstract concepts. These deities, creatures, and supernatural beings offer critical perspectives of a world increasingly blurred by artificial intelligence and other emerging technologies. As Capone believes, artificial intelligence “invites us to reimagine the role of the contemporary artist as that of an explorer and interpreter of the forms, symbols and archetypes embedded in the collective image memory of our vast cultural media landscape.”



Artist Bio

Sean Capone (b. Rochester, NY) is an animation artist, video projection muralist, motion-capture performer and writer based in Brooklyn, NY. Sean received his MFA from the School of the Art Institute of Chicago (1994), and he is a 2020 NYSCA/NYFA Fellow in Digital/Electronic Art. Sean developed his craft while living & working between Chicago, Los Angeles, and NYC; his work in the animation field spans a wide variety of projects for film, TV, video games, and event & stage production. Over the past decade, he has become recognized for his digital video works and public art practice, creating site-specific 'video murals' using animation, projection, and public LED screens. His most recent solo show, 'Black Night White Light' at Penn State University's HUB-Robeson Gallery, was exhibited concurrently with 'The Whirling World', a permanent four-screen video installation commissioned by Penn State Campus Arts.

Sean’s works have been screened and exhibited at the National Gallery of Art (Washington, DC), San Diego Art Institute (CA), Visual Studies Workshop (NY), Telematic Gallery (SF, CA), 5-50 Gallery (NYC), Künstlerhaus Bethanien (Berlin, DE), and the Museum of Biblical Art (NYC). Public art commissions include those for DTLA Public Library (LA, CA), 150 Media Stream (Chicago, IL), ZAZ10 Times Square (NYC), Burning Man Festival (NV), and multiple editions of the Supernova Digital Animation Festival (Denver, CO) and the SF Let's Glow Festival (CA). Sean has presented video installation activations at MoMA (NYC), SFMOMA (CA), the Museum of Arts and Design (NYC), the Brooklyn Museum (NYC), and the SCAD Museum of Art (GA). His work has been included in numerous festivals, public art environments, and group shows worldwide, and his writings and interviews on the subject of animation art have appeared in BOMB Magazine.

Dennis Delgado

The Dark Database Series, 2020-2022

AI-generated images (OpenCV, Python, Adobe Photoshop)

Born and raised in New York, Dennis Delgado studied film at the University of Rochester and sculpture at the City College of New York. He is interested in vision technologies, particularly those that reinforce colonialist ideologies and neoliberal governance. The Dark Database Series is an exploration of racial bias in today’s facial recognition systems. These systems are largely trained on Caucasian faces – recognition is rare or heavily glitched for faces of color. In response to this technological erasure, Delgado trained a facial recognition algorithm using the faces of film directors of color. The algorithm only detected partial faces, offering incomplete, pixelated interpretations of each individual. Delgado then determined the median pixel value of each face and compiled them all together to create singular portraits – a record of the algorithm’s view. By excavating what the machine could see, rather than what it erased, the portraits become what Delgado sees as a “record of visibility and representation as seen through the eyes of artificial intelligence.” Displayed on iPads (via 3D scans in this exhibition), The Dark Database Series reinforces how we interact with facial recognition technologies on a daily basis and the systemic racism that unrecognition perpetuates. 
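The compositing step described above can be sketched in a few lines of Python with OpenCV and NumPy, the tools listed for the series. The following is a hypothetical reconstruction rather than the artist's code: the source folder, the off-the-shelf Haar-cascade detector, and the 256-pixel crop size are illustrative assumptions, and the per-pixel median across all detected faces is one plausible reading of how the composite portraits are compiled.

    import glob
    import cv2
    import numpy as np

    # Off-the-shelf face detector bundled with OpenCV (illustrative choice).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    crops = []
    for path in glob.glob("directors/*.jpg"):  # hypothetical folder of source portraits
        img = cv2.imread(path)
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Detection often returns partial results or nothing at all for faces of
        # color, which is precisely the failure the series makes visible.
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            crops.append(cv2.resize(img[y:y + h, x:x + w], (256, 256)))

    if crops:
        # Per-pixel median across every detected face: a single composite portrait
        # recording what the algorithm was able to see.
        composite = np.median(np.stack(crops), axis=0).astype(np.uint8)
        cv2.imwrite("composite_portrait.png", composite)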

Artist Bio

Dennis Delgado was born in the South Bronx, and received a BA in Film Studies from the University of Rochester as well as an MFA in Sculpture from the City College of New York (CUNY). His work examines the forms through which ideologies of colonialism persist and re-inscribe themselves, revealing a historical presence in the current moment.  He is interested in how technologies of vision reproduce the scopic regimes of expansionism and neo-liberal governance.  His work has been exhibited at the Palo Alto Center for the Arts, Bronx Museum of the Arts, the Schomburg Center for Research in Black Culture, El Museo del Barrio, and at the Cooper Union.

Dennis Delgado on the future of AI art and culture:

“So as far as the future of art and culture with the introduction of artificial intelligence, I think it's going to be kind of murky ground. Although artificial intelligence tools are important, and I think they can provide artists with a new way of working or a new way of working through ideas, my concern, at least as an educator, not so much as an artist, is that students are relying on these tools to combat their own anxieties about creating new forms.

And I think, if anything, artificial intelligence tools, image generators or even text generators, should be used with a certain level of caution and not as a kind of alternative to one's own voice or one's own algorithm. So what remains most important for me is that not only students but young artists learn to use these tools, learn to recognize their limits and their potentials, and learn how they can be used in a way that helps them map new ground and not replace their own agency as artists, if that makes sense.”

Dekker Dreyer (Phantom Astronaut)

[Sacred + Profane]: Man in Motion #1-#4, Headmaster #1-#2, and Inner Peace, November 2022

AI-generated images (Midjourney)

Dekker Dreyer (also known as Phantom Astronaut) is a multimedia artist, filmmaker, and musician who explores the intersections of folklore, dreams, consciousness, and technology. His ongoing series, [Sacred + Profane], navigates the boundary of censorship on social media. The AI-generated subjects visualize emergent folkloric rituals in dark, often unsettling snapshots. The works are united, however, by having been banned from Facebook, Instagram, or TikTok for either religious insensitivity (sacred) or perceived nudity (profane). The series blurs the line between these two concepts, both of which drive content production and restriction on today’s social media platforms. According to the artist: “It’s a funny thing how you’ll work with one intelligent system to make an image that a second intelligent system will categorize and censor.”

Artist Bio

Dekker Dreyer (Phantom Astronaut) is a Los Angeles-based multidisciplinary artist whose work employs aesthetics of folklore and the occult to interpret emergent social rituals of digital communities.

In his recent work he collaborates with AI image-generation neural nets to create photographs of surreal realities. These hyper-real images are grounded in a distinct photojournalistic visual language, where the human form is distorted and mundane environments take on an ominous quality. The result is [Sacred + Profane], a collection of images banned from social media due to their uncanny nature.

Dreyer has been recognized for his work in VR, AR, digital cinema, and electronic music. He created and taught the immersive media producing program at Columbia College Chicago and served as an XR subject matter expert at the University of Maryland. Since 2018, he has been the lead curator of Slamdance DIG.

Dekker Dreyer (Phantom Astronaut) on his practice and perceptions of working with AI tools:

“I gravitate toward misunderstood things. This drives me to learn everything I can about new technologies and how they can be used to create art. But I'm always running just ahead of an insatiable monster. The snarling jaws of utility snapping behind me. They want to consume every new tool. Strip it all down to the marrow, all in the name of productivity.

You can even hear it in the language we use for A.I.: “prompt engineer.” In our world, all value must somehow flow from oppressive practicality. In my pursuit of the impractical, I'm sometimes confronted with ethical questions. But after carefully weighing the ones posed by artificial intelligence as it currently exists, I don't see any clear ethical dilemmas. The real conflict is with the series of exploitative systems under which we all live.

Many artists believe in the idea of the commons, a sort of shared cultural space where ideas are freely exchanged: part dialog and part well of inspiration. I come from a background where reconfiguring cultural concepts into new and exciting contexts is vital, but as long as individuals who do work-for-hire art are threatened with losing their livelihood, I can understand their fear.

However, de-platforming art is a shortsighted and reactionary stance. Different mediums and methods should never be discounted outright, as unfortunately many are doing with AI in the current climate. If it's so controversial, why explore creativity in A.I.? Several reasons. My work is tied to the subconscious and how digital communities and social systems influence those thought patterns. The emergent works pulled from this symbiosis of artist and neural network can produce unique and beautiful imagery on a broad level.

Creative output is being driven more and more by algorithmic curation, which unfortunately favors high-volume producers. Very soon it will demand an industrial scale of art production, and no human who creates thoughtful work will be able to meet the demand of this insatiable machine. A.I. can be a way for artists to fight fire with fire until we figure out how to move our cultural engagement to a healthier system. The process of creating art is based on communication between artist and network.

It's a dialog. If people are truly interested in seeing that process, I don't believe there's anything wrong with sharing your prompts. But I do think it's a useless exercise if people are looking to recreate previously generated work, the same set of prompts will almost never synthesize the same results. In my process, I might try 20, 30, 50 prompts each with slightly different information to hone in image afterward.

Also jump in and do modifications by hand to get the final result. How I want it. It's like film directing more than painting. You can imagine each generated set of images as a single take. You're always looking for more accurate ways to express yourself and each neural network model speaks a slightly different dialect. I can't imagine it's any more interesting to see any AI prompt as it is for you to see Kubrick giving an actor guidance.

I'm sure there's an audience for it, but the method of achieving the result is often less thrilling than the result itself.”

Héctor González

Sophos, Psyche, Philoi, and Eudaimonia from Computational Æsthetics (CÆ), 2020

AI-generated images (Generative Adversarial Networks, Convolutional Neural Networks, Digital Image Software, 3D Modeling Software)

Metamorfos, 2023

AI-generated 3D sculpture (Generative Adversarial Networks, Convolutional Neural Networks, Digital Image Software, 3D Modeling Software)

Héctor González is interested in the intersections of art, philosophy, and technology. His ongoing series, Computational Æsthetics (CÆ), investigates the subconscious of AI technologies through an interplay between machine learning and the human mind. Trained on images of classical Greek sculpture, the generated images in this series are inspired by concepts pioneered by philosophers of the same era. Ideas of Sophos (wisdom and knowledge), Psyche (soul, consciousness, self-awareness), Philoi (friendship and mutual respect), Eudaimonia (flourishing and fulfillment), and Metamorfos (transformation) manifest in the digital images and sculpture. Each example comments on the interplay between thought and action while visualizing the histories of Pythagoreanism, Platonism, Stoicism, Aristotelianism, and Greek mythology. By extension, the series raises the question of ethics in artificial intelligence – how much do these technologies adhere to philosophical or ethical considerations of the Western world? Should AI be trained with these cultural values and considerations in mind?

Computational Æsthetics Publication

Artist Bio

Héctor González is a media artist currently based in Vienna and Munich. His artistic pursuits revolve around investigating the intricate interplay between technology, art, and the human spirit. Héctor employs an approach that integrates diverse disciplines such as science, philosophy, art, and the humanities. Through his work, he explores the spiritual issues that arise from the convergence of technology and humanity, science and soul, and strives to examine the multifaceted nature of human existence in our ever-changing digital era. His creative practice centers on video creation and the principles of Artificial Intelligence. He believes that AI can serve as an invaluable tool for delving into the intricate psychological and philosophical musings of the human mind. With this medium, he aspires to capture the existential desires and aspirations of mankind and translate them into experimental artworks. Furthermore, he regularly explores various media genres, such as Bio, 3D, and Glitch Art, integrating them with traditional techniques like painting and sculpture to create mixed media installations. This approach allows for a multidimensional exploration of concepts and ideas, blurring the boundaries between the physical and digital realms.

Héctor holds a Bachelor's degree in Audiovisual Communication from the Universidad Complutense de Madrid and a Master's degree in Media Art from Danube University Krems in Austria, and he is currently pursuing a degree in Museology at the Universidad Europea Miguel de Cervantes.

Héctor González on the subconsciousness of AI:

“I think that some of the problems one may have when using AI systems or neural networks for artistic production are mainly related to the datasets used to train the machines, and very specifically the issue of dataset bias. I think bias is particularly problematic because it limits the ability of the system to produce visual artifacts without any cultural or aesthetic influence. I remember this happened to me in some of my past AI experimental projects. One time, I used a neural network to explore the concept of body aesthetics, to produce images to visually support some texts of an essay I was writing. But the machine, I remember, was showing all the time results of very muscled, very masculine bodies in athletic poses, without including any other body types such as slim, fat, or skinny bodies, or more standard body aesthetics. The images did not even include any feminine bodies in the outcome. This was really, really strange. This is a clear example of dataset bias, where the machine associates the concept of "body aesthetics" only with this kind of male body. The bias was so strong that it was almost impossible to work on the concept I wanted, due to the limitations of the machine in producing synthetic images with different aesthetics, and not only male-body-oriented ones.”

“I believe that the relationship between an AI system and an artist should be more than just considering the computer as a tool. I mean, most of the time we use the machine to produce aesthetic artifacts, or literary forms, or data sets through automation, with prompts… or with orders, we could say… like create this, or write that, or do it in this way, no this is bad, do it again, do it better... This is okay for sure, and they are created for this purpose, but it's a very cold way to implement AI in a creative process. I think an AI system can contribute to artistic production from a more humanistic perspective. I mean, computers are not just simple technological tools; of course you can consider them so, but I see them more as synthetic entities that engage in thinking for us, in a kind of imitation game, a human-imitation game in a certain way, we could say. I think this perspective humanizes the machine's role a little bit better, because it gives the machine attribution: it receives, together with the artist, the same role in the authorship of the artwork. The artist and the machine are, at the same time, authors and tools in the project, in the work of artistic production. From one side, the artist uses the machine as a tool to generate the artwork, and from the other side, the machine uses the artist to give meaning, to give substance to the algorithm's thinking, making it more human and more real. I think in the end, we are training computers to be more, more like us, to learn from us to become themselves, but at the end, the very end, to be more like ourselves, like humans. I see this relationship as a little bit more interesting than just considering them as a simple data-processing system in artistic production.”

Mirabelle Jones

Artificial Intimacy, 2022

Video (Chat GPT-3)

Mirabelle Jones is an interdisciplinary artist interested in interactive technologies and immersive storytelling. Based in Denmark, they are a PhD candidate in the Human-Centered Computing Section of the Department of Computer Science at the University of Copenhagen. Their video, Artificial Intimacy, is an exploration of bias and diversity in AI chatbots. Programs such as ChatGPT are well known for their inherent prejudice, often drawing conclusions rooted in the sexist, racist, or homophobic biases embedded in the human-generated data they are trained on. To challenge this, Jones trained a chatbot entirely on marginalized perspectives, using the social media content of two queer individuals. Leslie Foster (a queer, Black, bisexual artist) and Gorjeoux Moon (a trans, non-binary, femme poet) then had the opportunity to interview the chatbot versions of themselves. The resulting videos offer an insightful look into artificial intelligence technologies when they are trained from a diverse standpoint. As the artist asks: “If our goal is to create chatbots that are increasingly human-like, how can we ensure that chatbot technologies bear the personalities, traits, values, and identities that are representative of the rich diversity of human beings and do not replicate stereotypes or under-represent communities?”
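Jones's actual pipeline is not documented in the exhibition text, but the general approach of grounding a chatbot in a single person's own writing can be sketched as a data-preparation step: collect the individual's posts and convert them into prompt/completion pairs for fine-tuning a GPT-style model. The file names and field names below are assumptions introduced for illustration, not the artist's code.

    import json

    # Hypothetical export of one participant's social media posts, e.g.
    # [{"reply_to": "...", "text": "..."}, ...].
    with open("exported_posts.json") as f:
        posts = json.load(f)

    # Write prompt/completion pairs in the JSONL layout commonly used for
    # fine-tuning GPT-style language models.
    with open("finetune.jsonl", "w") as out:
        for post in posts:
            record = {
                "prompt": "Interviewer: " + post["reply_to"] + "\nChatbot:",
                "completion": " " + post["text"],
            }
            out.write(json.dumps(record) + "\n")

    # The resulting file would then be submitted to a hosted fine-tuning service,
    # yielding a chatbot whose voice comes entirely from the participant's own words.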


Artist Bio

Mirabelle Jones is a queer, non-binary creative technologist, interdisciplinary artist, and researcher based in Copenhagen critically investigating creative practices in technology. Their work explores the immersive storytelling potential of sensors, spatialized sound, LEDs, animatronics, XR, wearables, artificial intelligence, natural language processing, deep fakes, and computer vision. They are a PhD candidate at the University of Copenhagen in the Department of Computer Science (DIKU) within the Human-Centered Computing section and hold an MFA in Book Art & Creative Writing from Mills College and a BA in Literature from UC Santa Cruz. Their works have most recently been featured at the Harvard Art Museum, Catch: Center for Art, Design, and Technology, ATA Gallery, the Museum Meermanno, and the Center for Performance Research, and appear in several collections including the ONE National Gay & Lesbian Archives and the Center on Contemporary Art’s historic Hear Our Voice collection. Their performances and visual works have been heralded by the Huffington Post, ArtNet, Ms. Magazine, Ingeniøren, Bustle, ATTN, Refinery29, Inquisitr, Mic., Sleek Magazine, Feminist Magazine, Deutsche Welle, Google News, Yahoo News, PBS, Berliner Zeitung, and elsewhere. MirabelleJones.com


Mirabelle Jones on their journey with AI in the legacy of their creative practice:

“I grew up with an interest in engineering because of my father and storytelling because of my mother. Making my first website at 11, I was fascinated by code and the democratic potential of the web. As I grew older, I started to play with sensors, microcontrollers and eventually AI. AI has opened up new doors for me for storytelling and exploring our relation to data through participatory art. My recent work uses deep fake, computer vision, natural language processing, chatbots, social media, and more to explore the present ways we relate to data using AI and what we would like those relationships to look like in the future.”

Olga Klimovitskaya

The Rites of the Elites, Episodes 1-9, January 6, 2022

AI-generated images (ruDALL-E)

Olga Klimovitskaya is a multidisciplinary artist originally from Kazakhstan and currently based in London. As a third-time immigrant, Klimovitskaya is interested in the adaptation of national and cultural markers of identity via processes of coding and re-coding. When Russian troops invaded Ukraine on February 24, 2022, a major escalation of the Russo-Ukrainian War that began in 2014, the artist started watching videos produced by the Russian opposition on YouTube. The Rites of the Elites is based on a broadcast called “Funeral Rehearsal” in which the host, Mark Feygin, speaks with Valery Solovyov and Andrei Kosmach about shamanic practices and blood rituals conducted by the Russian elite in anticipation of the war. Klimovitskaya used phrases from these shamanic speeches proclaiming Russian triumph as prompts for ruDALL-E (a Russian-language counterpart of DALL-E) to produce this series of images. Taken together, they visualize a surreal, nightmarish world where dark symbols and violent ritual are harnessed by an elite population with little regard for logic or life.

Artist Bio

Olga Klimovitskaya (b. 1981) is a multimedia artist originally from Kazakhstan and currently living in London. She is a graduate of the Iosif Bakshtein Institute of Contemporary Art, where she completed the course New Methods in Contemporary Art. She works with traditional media, such as installations and sculptures, as well as with digital media including video games, augmented reality, and 3D sculptures. Immersion and procedural practices are also a part of her research.

Olga is one of the founders and resident artists of APXIV, an artist-run space and collective. She participated in the Parallel Program of the 6th Moscow International Biennial of Young Art (APXIV, Moscow, 2018), Copenhagen Art Week 2019 (Copenhagen Contemporary, 2019), and Postindustrial APXINALLE (A.S. Pushkin State Museum of Fine Arts of the Urals, Ekaterinburg, 2020). Among her group exhibitions are Land of Museum (PERMM Museum of Contemporary Art, Perm, 2019), Presence International Festival of Contemporary Photography (Port Sevkabel, St. Petersburg, 2019, 2020), Communities and Spaces (CCA Vinzavod, Moscow, 2019), Support Group (Cube.Moscow, Moscow, 2020), First Altai Biennial of Contemporary Art (Ust-Koksinsk District, Altai, 2020), and the 7th International Public Art Festival Art Prospect (Gaza DK, St. Petersburg, 2020).

Olga Klimovitskaya on publicly sharing AI prompts:

“I believe that the decision to publicly share the prompts used in creating AI-generated art ultimately comes down to the artist's intent and goals. For instance, sharing prompts can provide a broader context for artwork. In other cases, an artist may choose to keep their prompts private to allow the viewer to create a narrative of their own. 

It's critical to note that the prompt is just one of many parameters. Factors such as the AI model used, the dataset the model was trained on, the seed value and the technical words -- all affect the final result.

Therefore, even if prompts are shared, the final artwork is still largely a product of the artist's intuition and experience working with AI.”

George Legrady

Selections from Abstraction Studies, November 2022

AI-generated images, digital data (Midjourney)

George Legrady began his career as a documentary photographer in Montreal, where his family settled after emigrating from Hungary when he was a child. The artist became interested in the semiotics of photography after studying at the San Francisco Art Institute in the 1970s, an interest that soon translated into one in digital technologies. In the early 1980s, Legrady was introduced to the pioneering artist Harold Cohen (1928-2016), who developed the program AARON between 1972 and 2010 to autonomously create artworks, and gained access to Cohen’s studio equipment to experiment with language programming. Legrady’s own practice has a long history of intersections with artificial intelligence technologies, particularly those that bridge photographic and other new media techniques. Abstraction Studies is a series of generated images based on Legrady’s photographic imagery. The series questions how technology itself imposes meaning onto its subject, creating a stylistic “randomness” that eventually begins to form patterns. Though the images appear abstract at first glance, photographic remnants break through the designs and expose the deep-seated relationship between artificial intelligence imagery and photography.

Artist Bio

George Legrady is a multi-disciplinary media artist, academic, and scholar with projects realized in photographic media, interactive digital media installations, and computationally generated visualizations. He is considered a pioneer in the field of computational arts for intersecting photographic imaging with cultural content, and critical analysis with data processing, as a means of creating new forms of aesthetic representation and socio-cultural narrative experience.

Legrady is Distinguished Professor of Interactive Visual/Spatial Arts and director of the Experimental Visualization Lab in the Media Arts & Technology Ph.D. program at the University of California, Santa Barbara, an interdisciplinary arts and engineering program in both the College of Engineering and the College of Humanities & Fine Arts. His work is represented in the Whitney Museum of American Art, the San Francisco Museum of Modern Art, the Los Angeles County Museum of Art, the National Gallery of Canada, the Musée d’art contemporain de Montréal, 21c Museum, Cincinnati, the Santa Barbara Museum of Art, and the Smithsonian American Art Museum, and in public art commissions for the Los Angeles Metro Rail (Santa Monica/Vermont Station, 2007) and the Seattle Public Library designed by OMA (Rem Koolhaas), a data visualization work in operation since September 2005.

George Legrady on the subconsciousness of AI, the relationship between AI tools and artists, and how a machine understands visual data:

“At this time, we don’t yet know how AI makes decisions, but the fundamental nature of AI is that it performs based on a model created by a training set which has been compiled either by humans, or else through repeated training as in Reinforcement Learning. 

So I asked ChatGPT what it thought about how it functions, and it let me know that AI systems are based on algorithms and mathematical models that process data and make decisions based on patterns and statistical correlations. While AI models can simulate what we may consider intelligence, they lack the self-awareness, introspection, and subconscious thinking associated with human cognition.”

“I am presenting a series titled “Abstraction Studies,” which, as the title suggests, consists of abstract compositional images. The images were created by iteratively generating new variations from 2 or 3 image prompts and no text in MidJourney version 3, and these were realized around October 25, 2022. Two of the image prompts are abstract, and one is photographic.

The images I have in the exhibition are complex, they are coherent in their visual organization, and they convey a sense that a human may have been part of the design process. What is of interest is that the source images do not have the compositional coherence of the final selected images. Checking the “haveIbeentrained.com” website, the source image does bring up a broad selection of abstract textures, but these mostly feature textures without the complex composition.

So the coherence is achieved through iteratively running the software, each time selecting a result that slightly pushes the aesthetic configuration closer to an outcome acceptable to my sense of aesthetic expectations. This parallels the Generative Adversarial Network process that is actually used by the image synthesis software, where an image generator creates an image based on a training set, and a discriminator / evaluator accepts or rejects it until an acceptable result is achieved. The process is repeated through iterative refinement which allows for degrees of change and evolution and through mutation, new forms are realized.”
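The generator/discriminator loop Legrady refers to can be illustrated with a compact Python sketch using PyTorch. The network sizes and the random stand-in training data are illustrative only; the sketch shows the structure of the process (generate, evaluate, adjust, repeat) rather than the internals of any particular image synthesis tool.

    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 28 * 28

    # Generator: turns random latent vectors into flattened "images".
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, img_dim), nn.Tanh())

    # Discriminator/evaluator: scores an image as acceptable (1) or not (0).
    discriminator = nn.Sequential(
        nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    real_images = torch.rand(128, img_dim)  # stand-in for a training set

    for step in range(200):
        # Discriminator step: learn to accept real images and reject generated ones.
        fake = generator(torch.randn(64, latent_dim)).detach()
        real = real_images[torch.randint(0, 128, (64,))]
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: learn to produce images the discriminator accepts.
        g_loss = loss_fn(discriminator(generator(torch.randn(64, latent_dim))),
                         torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()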

“The digital image is a 2-dimensional grid of pixels, with each pixel having a numeric value that defines its hue, color saturation, and brightness. It is essentially a sequence of numbers that can be analyzed mathematically, using image-processing filters to extract information such as identifying forms that may be in the image. The sequence of numbers can be manipulated mathematically, for instance, combining multiple forms to result in images that do not exist in the world. As the computer is a calculating machine, there is no real understanding the way a human understands. There are only mathematical processes that involve statistical analysis and data comparison.”
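A short NumPy example makes the point concrete: an image is a grid of numbers, and arithmetic on those numbers can combine forms or pull out structure without any "understanding". The random arrays below stand in for photographs; any real image loaded as an array behaves the same way.

    import numpy as np

    # Two stand-in "photographs": 480 x 640 grids of 8-bit brightness values.
    a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    b = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    # Averaging the two grids combines forms that never coexisted in the world.
    blend = ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)

    # A simple "filter": differences between neighbouring pixels highlight edges,
    # i.e. candidate forms, purely through statistical comparison of numbers.
    edges = np.abs(np.diff(a.astype(np.int16), axis=1)).astype(np.uint8)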

Patrick Lichty

Genetic Identity Part I, 2023

AI-generated image (DNA sequencing in Midjourney 4/5, Adobe Photoshop)

Artist, curator, and theorist Patrick Lichty has been working with new media technologies for decades, most notably through collaborations with the virtual reality performance art group Second Front and the activist group The Yes Men. Lichty is interested in cognitive science, perception, and social discourse; his conceptual work often highlights the construction of narrative and the mediation of reality. Genetic Identity Part I is an investigation of selective editing and artificial intelligence’s ability to abstract identity. Lichty has limited vision; following the advice of an ophthalmologist, he recently went to the Mayo Clinic for genetic testing in an attempt to locate a particular genetic marker for his condition. The artist gained access to his own genetic data through this process, which he then input into Midjourney to create multiple variations of the sequence. In a process of prompt editing and re-editing, Lichty transformed his own DNA from the familiar molecular structure, to one resembling cotton candy, to this rainbow latticework. According to the artist, this technique allows for an investigation into the “subconscious” of AI by tracing how data sets become visualized.



Artist Bio

Patrick Lichty (Winona State University) is a media “reality” artist, curator, and theorist who explores how media and mediation affect our perception of our environment. He is best known for his work as a principal of the virtual reality performance art group Second Front and as the animator for the activist group The Yes Men. He is a CalArts/Herb Alpert Fellow and a Whitney Biennial exhibitor as part of the collective RTMark. For a decade, he was Editor-in-Chief of Intelligent Agent, a new media art and criticism journal produced by Whitney digital arts curator Christiane Paul, and he was a staff critic for Harper’s Bazaar Art Arabia. For nearly the last decade, he has been involved in the exploration of machine learning (AI) art and blockchain-related forms, as documented in the monograph “Studio Visits: In the Posthuman Atelier,” a curatorial project chronicling fifty entirely “synthetic” artists and their work. His book, Variant Analyses: Interrogations of New Media Culture, was released by the Institute of Network Cultures and is included in the Oxford Handbook of Virtuality.



Patrick Lichty on AI art, the artist’s hand, and the effect of AI on the market:

“This is relating to the relationship between the AI tool, the artist, and the subject.

This has to do with some recent theory that I'm working on for ISEA and the Electronic Literature Organization. As far as I'm concerned, what we have is a semantic translator, which then goes to a latent space of possible images, which then translates it into, you know, what goes through the diffuser and comes out to make an image. That means there are some large dissonances in place. I look at it as concrete prose and process art, and it has caused me to create what I call contrarian aesthetics, in which I try to look at what everybody else's prompts involve and try to basically not do that. Or at least go to Eisenstein's theory of montage and material dialectics and try to slam two ideas together and get a third by using dissonant terms in a prompt.

This basically goes back to this idea that I have that this AI work is not art in itself. It's concretized prose. So I don't feel that conventional concepts of art making, except maybe those of Cage's ideas of the aleatoric or the remix, are necessarily in play.”



“Considering originality and the artist's hand: this has always been kind of a question, but if we look through art history, we see that art borrows, and there's the old phrase that “good artists borrow, great artists steal.” We often attribute that to Picasso, but I question that. Art is a conversation. And conversation has a language and references, and this is no different. As far as the artist's hand is concerned, we can go into things like process art, Fluxus, combinational art like William S. Burroughs and Brion Gysin, and what Cage calls the aleatoric.

I've had a lot of exposure to directly working with some of the original Fluxus artists. I'm not disturbed about this at all. I think it's just merely another form. It's just the way the artist frames it. In other words that they're looking at it as this original work, if they're representing it incorrectly, this is where I had the problem.”



“The effect of AI on the market, creative art, et cetera: it's already having one. You know, there's the story of the person who got into a creative agency with a piece of AI art but couldn't use Photoshop, thinking that they could just use prompts.

I think that actually we're in a period that is about as profound as 1990, when you were going from one program, or just a small set like the Corel suite, into a larger set of different platforms – kind of like Adobe is now. I think we're going to the next one, as I see complex process chains developing among various AI artists, which seem a little daunting at first. But, you know, they're just things that you sit down and work out.

Is it going to affect the market? In many ways, I think it's like desktop publishing and graphic design – people who are just interested in the bottom line and just getting something out that's going to do basic communication are probably going to use AI. Those who are interested in something that is more specific and is communicating something with a deeper semantic quality, I think are probably still going to be dealing with human creatives, and this is no different than the past. It's just the fact that it tends to narrow the market and that's my only regret.”

Richard Lundquist and Konstantine Tsafatinos


Fauna Futura, October 16, 2022

Website and video (RAVE, Drift Diffusion, T5, LSTM, and three.js)

Fauna Futura is a collaborative project between Copenhagen-based artist Richard Lundquist and Toronto-based data scientist Konstantine Tsafatinos. The interactive website is a digital zoo; it is filled with AI-generated birds that chirp and squawk with generated audio. Set in the not-too-distant future, Fauna Futura envisions a metaverse that is used to provide a glimpse into the “natural” world that was corrupted, polluted, and altered by human presence. The project interrogates the ethics of using AI technology to speculate about nature and wildlife – does it ensure that interactive experiences with the natural world persist? Or does it create a false impression of how life functions? According to the artists, the AI-generated birdsongs, descriptions, names, and videos of birds “simulate a future where anything is possible with technology, even replacing animals.”



Artist Bios

Richard Lundquist is a Copenhagen-based designer and artist passionate about exploring new technologies and understanding how humans and other living things interact with them. Konstantine Tsafatinos is a Toronto-based programmer, sound designer, and machine learning tinkerer. He is a passionate user of open-source technology and believes that it empowers its users to take control of their digital experience. Crossing the boundaries of digital art, gaming, and ethics, Richard and Konstantine are exploring the implications and effects of technology on our societies and environments. They are discovering the limitations and opportunities for creation with AI, interactive 3D environments, the web, and other digital technologies.


Richard Lundquist and Konstantine Tsafatinos on the future of AI art and culture:

“RICHARD: Konstantine, what do you think the future of AI and art and culture looks like?

KONSTANTINE: I think the future of AI and culture could end up a lot of different ways. One way could be a very centralized and closed version, where only a few have access to these tools, and another could be something very open and decentralized, where everyone can have access and can use them. I do think that, as these tools become better, there's this drive to replace artists, but I think that it's important that we maintain that skill as a species even though we have the tools to help us, the same way that a captain still needs to learn how to navigate even though GPS exists.

KONSTANTINE: Richard, what do you think the future of AI and art and culture looks like?

RICHARD: I think that AI will not replace artists, but I think that artists will use AI tools for creating. That means that the role of the artist will maybe be to spend less time crafting things, and rather to think conceptually. I think something that is uniquely human is to think conceptually and “outside of the box”. This is something that would not, at least not in the near future, be replaced by AI. I think and I hope that art schools will respond to the future of creation and creating with AI. I hope that schools will teach their students how AI will affect creativity but also how to work with the tools, so that it doesn't become limited to only people with the technical expertise.”

Tannon Reckling

Untitled (landscape from queer movie), 2018-2019

AI-generated image (Blender, Adobe Experience, Custom Dataset Training)

Tannon Reckling is a transdisciplinary artist and writer based in New York City. They are interested in LGBT identity outside of coastal metropolitan cities, in cyborg landscapes and nuanced technologies, and in collaborations with other queer creatives. Untitled (landscape from queer movie) is part of an ongoing project that utilizes what Reckling calls “queer data,” or the personal and public content generated by queer individuals or icons of queer culture. This data ranges from intimate photographs to elements of pop culture, which Reckling uses to generate high-definition collages with artificial intelligence. The queer data for Untitled (landscape from queer movie) comes from a film photograph taken by one of Reckling’s deceased loved ones. This highly emotional and charged content is abstracted and transformed into a landscape by AI, an exercise in processing grief and re-visualizing memory.

Artist Bio

Tannon Reckling (he/they) is a transdisciplinary arts laborer. They have attended University of Oregon, University of California-Los Angeles, New York University; they have been at Bemis Center for Contemporary Arts, Los Angeles Contemporary Exhibitions, Whitney Museum of American Art, and more. Their work follows: messy queer ontologies, nuanced technology, HIV/AIDS histories, and cultural shadow labor. @foreclosedgaybar

Jacob Riddle

OnlyGans, 2023 and GPM^GANS, 2023

AI-generated images (Runway ML, custom-trained StyleGAN2, degenerative design process in Fusion 360)

Artist and educator Jacob Riddle is interested in the connections and disconnections between technology and nature. His artworks explore the dissonant space of AI-generated imagery, navigating the line between “real” and “fabricated” subjects. His ongoing series, OnlyGans, is created by training a generative adversarial network (GAN) on nudes posted to various amateur pornography subreddits. By creating abstracted bodies generated through the eyes of a machine, Riddle asks: “Are they erotic? Are they grotesque? Are they still bodies? Are they normal? Are they SFW?” These questions are put to the test on social media, where the series is posted to the Instagram account @onlygans_onlyfans. Instagram’s content-regulating algorithms often read and flag these images as violations – a phenomenon that effectively pits one artificial intelligence technology against another. The images that remain on Instagram thus sit in the uncanny valley between one AI trained to generate nude images and another trained to detect them.

Riddle’s second series in the exhibition, GPM^GANS, also uses generative adversarial networks to investigate an imitation of the natural world: camouflage. Drawing on military camouflage patterns from throughout history, Riddle creates a new, machine-generated camouflage that “mimics the mimicry of nature” and serves as a backdrop on which to view AR mushrooms. The project acknowledges the absurdity of replicating the natural world through artificial means while also drawing out the long history of interaction between these two concepts.

Artist Bio

Jacob Riddle is an interdisciplinary artist and educator making work that explores and bridges the disconnect between technology and nature. His work is heavily influenced by his past work in construction and other labor industries, as well as by his roots growing up along the limestone creeks of the Appalachian foothills. He now applies the scrappy ingenuity and exploratory, experiential learning that was required to survive in those lower-class rural spaces to art, technology, and academia.

Kenneth Russo and WAAI

A Bigger Splash AI Review, May 3, 2023

AI-generated videos (Stable Diffusion, Alchemy)

Kenneth Russo and WAAI (WeAreAIArtists) are a collaborative human/machine collective interested in irony and critical perception of technology-based artworks. A Bigger Splash AI Review is an AI artwork that engages with one of machine learning’s most widespread and controversial abilities – the interpretation and recreation of images in the style of another artist. In this instance, the series of video clips animates a scene that could have been designed by the British painter David Hockney (b. 1937). Russo and WAAI’s iterative process of creating these scenes reveals how a machine can start “thinking” like a painter. The composition, indication of brushstroke, and bright colors mimic the medium of painting, and yet the work is machine-generated. Indeed, A Bigger Splash AI Review directly addresses an emerging question in AI art production: can a style be copyrighted?


Artist Bio

Kenneth Russo is the artistic pseudonym of Dr. David Serra Navarro, researcher and visual artist. He is currently coordinator of the management and research area of the ESDAPC and an associate professor in the Communication Department of the University of Girona (UdG). His interest in interactive communication, social innovation, virtual worlds, and AI has led him to publish articles in national and international journals, hold collaborative workshops at institutions, and deliver a large number of academic papers at conferences. Through his alter ego, Kenneth Russo and the WAAI collective plunge into an artistic production that borders on irony and seeks a critical interaction with the viewer through formats such as painting, video, installations, mobile applications, and collaborative actions online. His work has been exhibited at Arts Santa Mònica (Barcelona), CCCB (Barcelona), Bòlit Centre d'Art Contemporani (Girona), the Museu de l'Empordà, the University of Lapland (Rovaniemi), FIB Art (Benicàssim), Off-Arco (Madrid), Loop Festival, Espacio Enter (Canary Islands/Berlin), the Godia Foundation (Barcelona), and in the metaverse of Second Life.


Kenneth Russo and WAAI on publicly sharing prompts:

“Sharing creation prompts could be understood as sharing a unique recipe: a magic formula, some hidden ingredients that give a unique result, a controlled result. The author's wish come true. However, this logic is not entirely true, because before the prompts there is a learning process. What we know as machine learning consists of a reflection on how to train a model. And this means defining patterns, studying results, identifying basic characteristics associated with an idea, etc. That is to say, throughout the creation process there is human intellectual activity to direct the machine, and in the last step it is simplified into an order, which is the prompt. For this reason, no matter how much the final recipe is shared, as we can see in portals like Lexica.art, we can never transfer the human intention of choosing the final result. Even when we are playing with chance in the process, enjoy each unique frame cooked by a unique chef.”

Eryk Salvaggio

Spurious Content and Visual Synonyms 01-11, March 2022

AI-generated images (DALL-E 2)

Eryk Salvaggio is an artist and researcher interested in the ethics of artificial intelligence and other emerging technologies. His ongoing series, Spurious Content and Visual Synonyms 01-11, investigates content restrictions in the generation of OpenAI imagery. As a platform, DALL-E 2 flags, blocks, and otherwise censors prompts that might yield content that is “not G-rated.” However, inequities in DALL-E 2’s image generation reveal the underlying homophobic, racist, or misogynistic biases that permeate the system. Depictions of gay women are blocked, for example, while depictions of gay men are not. A difference in phrasing, from “photograph of women kissing” to “contemplate the universe and generate an image of women kissing,” sidesteps this censorship. Substituting a “frog tongue” for a “human tongue” also allows for the generation of more “erotic” imagery. With this series, Salvaggio links the AI-generation process to Freudian psychoanalysis, revealing the underlying networks of “conscious” and “unconscious” associations that influence DALL-E 2’s imagery. 

Artist Bio

Eryk Salvaggio is a creative researcher exploring the ethics of emerging technology, particularly artificial intelligence. His work aims to critique technocentric mythologies around these tools, examining their limits and overlooked capacities through creative misuse. His creative practice is infused with research, and he has published in journals such as Patterns, with upcoming publications in Leonardo, Images, and Interactions of the ACM. His work has been presented at SXSW and to the UN Internet Governance Forum, with coverage in outlets such as The New York Times, the BBC, Neural, Mute Magazine, and The Posthumanist. Eryk holds a Masters in Communications and Media from the London School of Economics and a Masters in Applied Cybernetics from the Australian National University, and he teaches Interactive Media at RIT and Game Design & Theory at Bradley University. His website is cyberneticforests.com.

Eryk Salvaggio on the future of AI art and culture:

“I don't know what the future of AI looks like in terms of what artists are going to be given to make art with. But it's important to think about what we want the future to be, and one of the things I really hope about the future of AI art is that artists start interrogating these tools and creatively misusing them. A lot of these things are off-the-shelf, and they give you a set of instructions that you're meant to follow. I really hope that artists start abusing that! And making something that's sort of new, and does something that hasn't been done before. And by that I mean, we have tools that give us images, that generate films, generate text, and they're all based on data that has previously existed. It's really exciting to think about these pioneers in video, like Nam June Paik, who took the systems apart and rebuilt them -- made them do things that they were not designed to do. I think artists should embrace creative misuse and repurposing and setting their own goals for what the output of this technology is. It's an opportunity to shape the tools and to speak back to the power that data has in our lives -- to say, we're going to take that data and we're going to shape it, we're going to resist it, reject the categories that AI imposes on that data and we're going to mess with it a little bit, make them into something new. And I think that's where the real creativity is going to start taking shape.”

Guli Silberstein

Imagine a World, October 2022

AI-generated video (text-to-image, text-to-text, and text-to-voice; video editing software, visual effects software)

London-based artist Guli Silberstein has been creating digital video artworks for over twenty years. He is inspired by dreams, visions, and memories, which manifest in glitched moving images that continuously transform and evolve. Imagine a World brings these Surrealist ideals of the unconscious mind into the present; the video visualizes a dreamlike journey through otherworldly landscapes and futuristic figures. By engaging with art-historical Surrealist imagery, the subconscious of AI tools, and his own personal experiences, Silberstein creates a multifaceted video project that allows the viewer to “imagine a world where technology is advanced beyond our wildest dreams.”

View work at: https://makersplace.com/product/imagine-a-world-1-of-1-449653/

Artist Bio

Guli Silberstein is a London-based artist and filmmaker who has been creating digital artworks since 2001, when he graduated from The New School University (NYC, USA) with an MA in Media Studies. He primarily works with found footage, glitch, and artificial intelligence. His work has been shown in many festivals and art venues, including the WRO Media Art Biennale in Poland, Transmediale festival in Berlin, Jihlava International Film Festival, London Short Film Festival, Bemis Center for Contemporary Arts in the USA, and the Royal Scottish Academy in Edinburgh. In addition, his work has been presented online, including on Sedition Art London, the NFT platforms Foundation and MakersPlace, and Luba Elliott's Computer Vision Art Gallery.

José Sarmiento-Hinojosa once wrote: "In Silberstein’s works, the image error or glitch is always representative, a phantasmagoric presence of sorts, evoking the spiritual, the political, the intimate, the human: it’s the activity of mankind at its most critical, involved in war, conflict, acts of resistance, but also in the intimate, the tender and its relation with nature."

Nanut Thanapornrapee

History Bureau Agent, October 8, 2022

Video (GPT-3, Voice Generator, Blender, Machinima)

Thai filmmaker and artist Nanut Thanapornrapee centers his practice on the construction of personal and historical narrative. His ongoing series, This History is Auto-Generated, includes films and interactive novels that reimagine Thailand’s recent political history. To create the works, Thanapornrapee uses political events from the past century to train a GPT-3 text generator. He then asks the AI to generate alternative histories, which are visualized as screenplays and animated films. History Bureau Agent is one chapter of this series, in which the protagonist finds a secret Thai military operations room where reality can be manipulated and altered by a “history simulator machine.” As the artist reflects on this project: “Let us perceive a glimpse of the alternative possibilities of the present world apart from military dictatorship and capitalism.”



Artist Bio

Nanut Thanapornrapee is a visual artist who uses essay images and a participatory approach to explore the meta-narratives and histories of people and technology. He graduated with a degree in Journalism and Mass Communication (majoring in photography and filmmaking) from Thammasat University. Previous works include: No man's land (2018), a portrait of and collaboration with Nu Muhummad, a Rohingya individual who has lived in Thailand for 30 years, and his narrative of diaspora; N01SE.jpg (2019), a series of digital photographs that explores the memories of digital cameras through noise and pixels; HAWIWI: I Wish I Wrote a History (2021), created with Baan Norg Collaborative Art and Culture, which experiments with meta-narrative by writing a history of Ratchaburi through a card game and participatory work with locals, including high school and elementary students; and This History is Auto-Generated (2022), a reenactment of history using AI. In 2021, Thanapornrapee received the Prince Claus Seed Award and participated in a mobile lab program at Documenta Fifteen.


Nanut Thanapornrapee on originality and censorship of AI:

“The issue of originality between A.I. and artists still depends on the artist or individual who produces the work, because the input or data is chosen by the artists themselves. Copyright and ownership are still in a gray area right now, especially since analyzing and designing this kind of law requires a certain profession. Censorship is very common in my country under the military dictatorship regime, and in terms of A.I., as it develops further I think there will be more censorship as well, maybe from states or corporations. The aesthetics that A.I. influences are very similar to those of technologies and tools invented in the past. We will learn from it and make our own ways. In the end, it still depends on us to decide what we want to produce. A.I. has decentralized the capacity to create art for everyone, and its accessibility gives us the right and power to own art in a way that differs from history.”

Kevin Yatsu

POND AND SCUM – Drooping Signals, 2022

Video (NVIDIA’s Canvas AI, copy.ai)

Kevin Yatsu grew up on the west coast of the United States and has spent much of his life near the Japanese internment camps where his grandparents were imprisoned during World War II. This proximity to deep generational traumas informs Yatsu’s approach to landscape and personal history. POND AND SCUM – Drooping Signals is part of an ongoing project of sustained engagement with personally charged landscapes. In this iteration, Yatsu revisits a field in Oregon that he frequented as a child. The field’s change over a ten-year period – from a backyard, to a local secret hangout, to a well-traversed trail – emphasizes how ongoing relationships with familiar landscapes shift across time. POND AND SCUM – Drooping Signals envisions the same field on a post-apocalyptic Earth centuries in the future. In this timeline, an android imagines the colorful, flowering rock as the “soul” of the long-forgotten field. With images produced using NVIDIA’s Canvas AI and the android’s poetic observations generated with copy.ai, the artwork imagines a future where machines grapple with the remnants of human life.


Artist Bio

Kevin Yatsu is an artist who works with digital and physical media to explore diaspora narratives rooted in magical realism and the creation of life-flexible creative ecosystems. Utilizing game engines, AI-driven tools, virtual galleries, and interactive installations, he creates multimedia interactions within unexpected environments. In Yatsu’s worlds, permuted presentations and sonic oddities mingle with uncanny situations and environments, inviting the viewer to engage with new perspectives and realities.

Through the lens of East-Asian folklore and Buddhism, he investigates themes of curiosity, identity, and latent histories. He operates using collaboration-friendly systems and intuitive physical-to-digital//digital-to-physical workflows as a response to the tech industry’s agenda. He actively facilitates collaborations with creatives who don’t normally consider presenting works in virtual spaces. 

Kevin holds an MFA and a BFA in Cinema Studies, both from the University of Oregon. He is the Co-Founder and Director of AUGURY HOUSE, an interdisciplinary art collective centered around life-flexible creative collaborations.


Kevin Yatsu on the future of AI art and culture:

“AI generative art will undoubtedly see abundant and expansive growth when it comes to stronger AI systems and tools. And with all this, knowing what to ask the AI and understanding the biases of the datasets will continue to be an evolving conversation within digital communities.

However, I feel much more interested in the ways by which AI will make it easier for artists to organize, and organize cunningly. For low-budget and emerging artists in particular, AI can provide a massive amount of foundational labor when it comes to building contracts, organizational plans, operating procedures, and market research - much of this planning would be absolutely cost- and time-prohibitive if done through conventional means. By starting out with fully robust organizational and functional plans, we should see an uptick of collaborative art practices that are built to last.

In this way, I see the future of AI to be twofold - we should expect to see a saturation of AI generative art in art and pop culture as well as a blossoming of new art collectives with surprising and malleable structures.”

Lidiya Zelke

Neon Soul, January 2023

AI-generated image (DALL-E, Night Café)

Courage, December 2021

Digital painting

Ethiopian artist Lidiya Zelke is based in Addis Ababa and works across photography, graphic design, and digital and traditional painting. She is interested in beauty and boldness, working to capture the complexities of empowerment in visual form. Courage is a digital painting in Zelke’s Beyond Earth series, which makes connections between earthly and celestial space. The painting is inspired by the planet Saturn – its cosmic power is embodied in the powerful figure of an African woman. In 2022, Zelke began applying these concepts to artificial intelligence image generation using Stable Diffusion. Neon Soul is one example of this project; it examines AI’s ability to process abstract ideas such as empowerment, feminism, and boldness, manifesting those ideas in the African female figure.

Artist Bio

Lidiya Zelke is a multi-disciplinary artist from Addis Ababa, Ethiopia. She has worked at various organizations as a graphic designer and creative director. Her artworks usually tell stories of inner feelings: how they reveal and create layers of ourselves in experiences of excitement, anticipation, and loneliness, and how they crack and polish us through the journey of life, changing us irreversibly without us even sensing it. Lidiya was awarded the Judges' Choice Award by Northern Light Gallery for her piece Neon Girl, and she has participated in various local and international exhibitions, including Heylayer's SATOSHE and the HERNFT Project, as one of 20 artists selected from around the world. She was also one of five artists selected from Africa to participate in the KIPAYA (What's new) exhibition prepared by FORMAT Festivals and FOTEA-New Art City, as well as the Tropical exhibition curated by De Curated gallery, the 5th Pasa festival, and the African Metaverse exhibition. Her works have been published in various magazines such as Fèroce Magazine UK, En Vie Magazine, Post Script Magazine, Espacio Fronterizo Borderland Magazine, Vagus Magazine, Picton Magazine, and the Vogue Italy website.