conversations – Interview by Anika Meier – 30.06.2024
BARD IONSON: AI AND THE UNCANNY VALLEY
EARLY GAN AND AI
Influenced by a background in software programming and various tech roles, as well as a lifelong interest in the arts, particularly literature and philosophy, Bård Ionson discovered a deep connection to art during college. An encounter with Nam June Paik's sculpture INTERNET DWELLER in 2012 sparked a significant shift in perception. Inspired by Paik's approach of creating art from pre-made objects and ideas, Bård Ionson, previously considering himself more of a writer and coder than a visual artist, began exploring art creation through code, electronics, and eventually artificial intelligence. The experience with INTERNET DWELLER served as an epiphany, prompting a decision to embrace the identity of an artist and marking a transformative moment in his artistic journey.
Ionson experienced a moment of recognition while working on a new GAN series titled PAINTING WITH FIRE: A HISTORY IN GANS, celebrating the invention of GANs and their various output styles. The timing was perfect as there was a growing interest in GAN art, particularly in collecting the oldest GAN NFT pieces. He found it intriguing how his art was judged based on its age due to the blockchain ledger, which records the exact time of creation for each NFT. Collectors were drawn to the unique technique of GANs, contrasting with the current trend of using prebuilt models with word prompts.
In conversation with Anika Meier, Bård Ionson discusses early GAN art and the current status of AI art, training models, working with prompts, and shares how he transitioned from a software programmer to an artist.
Anika Meier: Bard, do you remember when you first heard about artificial intelligence?
Bard Ionson: It is difficult to remember exactly when I first heard about artificial intelligence. Artificial intelligence is something that science fiction had speculated about before it became a reality. I have always been fascinated by robotics and the automation of work. Perhaps being fascinated by such efficiency is an American trait. I grew up in Denver, Colorado, near my grandparents. My grandfather was very mathematically smart, and I would read all of his Popular Mechanics magazines, some of which he had saved from the 1950s. This magazine published articles about new inventions like flying cars, electric cars, self-driving cars, new airplanes, computers that could think like humans, and robots, among other things. This exposure was one of my influences.
I read many science fiction novels and watched shows like STAR TREK and DOCTOR WHO, which featured a lot of artificial intelligence. Some of them even featured characters that were half-biological and half-robotic, like the Daleks in DOCTOR WHO. The computer on the Enterprise is another example of an AI that could answer questions and analyze data through voice prompts.
Somewhere along the way, perhaps in Popular Mechanics, I read about AARON, created by the artist Harold Cohen, which was a robot that could draw pictures automatically. Of course, like computers and AI today, it was coded and trained by a human. However, this was an early form of artificial intelligence.
Due to my interest in computers and automation, I learned about the Deep Blue supercomputer by IBM, which held a public relations event in 1997 where it played chess with the Russian chess grandmaster Garry Kasparov. The AI computer emerged victorious, generating significant press coverage at the time.
AM: What were your thoughts back then about artificial intelligence?
BI: I found artificial intelligence interesting, but technically out of reach for me personally back then. At that time, it required a supercomputer and a specialized background in math and algorithms. My degree is not in computer science, which emphasizes math and low-level operating system coding. I have a difficult time manipulating the deep abstractions of calculus and linear algebra. I was better at simple math and logic, such as computing the cost of a three-minute call between New York and Bogota. I saw it as another tool to be used in attempts to predict future events.
Much of the artificial intelligence problem set focuses on predicting the future based on the history of past events. I rely on others to invent these algorithms to use in the software I create. The current advent of ChatGPT is based on what we call a large language model. Trained on billions of words in almost every language, it works by predicting the next word. You ask it a question, and then it combines the data in a matrix of decisions to come up with what word would come next, repeating that process over and over to answer your question. AI art operates in a similar manner. It is based on a system called a neural network that makes predictions on what pixel and color to create next, using the images it was trained with.
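The predict-the-next-word loop described above can be sketched with a toy bigram model: simple word-pair counts stand in for the billions of learned parameters a real large language model uses. The corpus and every name in this sketch are purely illustrative, not how ChatGPT is actually built.

```python
from collections import defaultdict, Counter

# Toy "language model": count which word follows which, then generate
# text by repeatedly predicting the most likely next word, feeding each
# prediction back in -- the same loop the interview describes, in miniature.
corpus = "the cat sat on the mat and the dog sat on the mat".split()

# Build bigram counts: next_words[w] tallies the words seen after w.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    candidates = next_words.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate by repeatedly predicting the next word from the previous one.
word, generated = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # → "the mat and the mat"
```

A real model replaces the count table with a neural network and predicts over sub-word tokens rather than whole words, but the generate-by-repeated-prediction loop is the same.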
Throughout my career, I have grappled with the irresistible force of "nature" that technology seems to embody. We use it as a tool, but often society as a whole adopts it so extensively that everyone else must follow suit. An example of this is the automated teller machine and internet banking. In banks, there are fewer and fewer bankers at the counter as more people opt for machines and internet services. The individuals who prefer in-person transactions struggle to find a human attendant amidst this shift. The widespread adoption of the internet illustrates how many activities now necessitate a computer, a requirement that was not as prevalent before 1992. I do not have answers on how to balance the needs of individuals in the face of technology that they must use or opt out of.
AM: How did you get into art? If I am informed correctly, you are a software programmer.
BI: I am a software programmer and have worked as a network administrator, software tester, DBA, and early internet evangelist. But I have also been heavily influenced by the arts. Starting in college, I found the most enjoyment in my art classes, literature classes, philosophy classes, and the drama department. I was always creating something as a hobby, but it was typically poetry or short stories.
In 2012, I was introduced to a piece of art called INTERNET DWELLER by Nam June Paik at a resort. It was one of 13 variations of a sculpture made from TVs and playing video art created by Paik. The books at the resort described the arc of his career and the intent behind his work.
I learned about John Cage in college and was interested in the philosophy behind his ideas. The idea that anyone can be an artist from the FLUXUS movement was mentioned in the book about Nam June Paik. This idea came to me through the sculpture, too. It was something I could aspire to and not based on a skill I did not have, such as painting or drawing. I never considered myself a visual artist until 2012. I have poor hand-eye coordination, and spatial awareness is not something my brain does. The way Paik created with ideas and pre-made objects like TVs, cameras, and pre-made statues was something I could do. I was more of a writer and coder. So I began to create art with code, electronics, and then with artificial intelligence code.
The INTERNET DWELLER just really grabbed me. I would go to see it every day of the vacation. I could see the humor of the artist in it. In my own mind, I could hear Nam June Paik, or was it the sculpture INTERNET DWELLER itself, saying, "You are an artist." It was perhaps the essence of Paik in the sculpture. And that is when I decided to be an artist. I know it sounds hokey, but it was a sort of epiphany for me.
AM: When we see Nam June Paik references today, especially in Web3, there’s usually a connection to TVs. What impressed you about Nam June Paik?
BI: I spent months just researching Nam June Paik after my first encounter with his piece, INTERNET DWELLER, and looking for images of the other 12 INTERNET DWELLERS. The more I learned, the more I was enchanted by his work.
What pulled me in was the humanity and humor that were evident in the art. He made art fun and encouraged play. It was not just cold technology and high-brow stuffy art. There was an interactive element to much of his art. It was designed for people to touch and play with. (Of course, it is not exhibited this way anymore.)
I also became aware of his writings and papers about art. He proposed building the Electronic Superhighway, a term he coined, and asked the Rockefeller Foundation for a grant in 1974 to build a computer network just for artists to collaborate on. He staged some of the first live art performances broadcast over worldwide satellite. There were RC robots dragging violins around, video synths, magnets on TVs, and a Buddha watching himself on a TV.
The TV seems to be an item of nostalgia in Web3. I am not sure if that is because of Nam June Paik or because it is a symbol of a simpler past. The thin screens of today lack the heft and gravity of a CRT. It is still a versatile tool to manipulate or to give an exhibition some visual appeal.
The body of the work of Nam June Paik encapsulated so many different aspects of my abilities and interests. I love to tinker with mechanical and electronic things. My skills at networking and computers were evident in the materials Paik used. Instead of using paint and drawing, he used junk, found objects, and assemblages of technology. It was a conversation about using technology. (Now I use AI to engage in a conversation about AI.)
And I started to try and make art using odd techniques with technology, such as running video tape between two VCRs, a prayer machine inspired by Paik’s interactive sculptures and Margaret Atwood.
AM: How did you get started creating art?
BI: At first, I tried to make art like his using old technology. This led me to the oscilloscope, where I could use sound to draw on a screen. It also scratched the itch of wanting to explore early computer history. The earliest computers used oscilloscope vector technology as an output device.
One thing kept leading to another, and I used what I learned in one medium to feed the next. I often reach back into my experience as a programmer to make art. It is a perfect fit for AI art because I have the ability to use skills I learned in computing the price of a phone call or searching a database to communicate an emotion or idea through art.
AM: And when did you get inspired to work with AI as an artistic medium?
BI: AI is very specific in its knowledge, as you have to make it specialized for a specific task. Over time, AI techniques like neural networks have become a little more generalized. So these systems were good at chess or drawing, but not both. But that did not inspire action in my desire to create.
It was not until 2018, when I saw THE PORTRAIT OF EDMOND DE BELAMY by the collective Obvious sell at Christie's for $432,500, that I took action and made AI art myself. There was a controversy around the sale; another artist and programmer said his AI models had been used to make the art.
AM: Can you speak about the controversy around the sale?
BI: I should explain how this works. In the current state of artificial intelligence, a lot of data is used to teach or train a model. However, someone has to write the code to read all of this data and transform it into a form that can be memorized in some way. Subsequently, this code is executed to train the images into a model. This model encapsulates the essence of the training material in a matrix or a vector. Therefore, there are many different people involved in laying the foundational pieces, such as computer chips, algorithms, code, and research.
This situation led to a controversy regarding the PORTRAIT OF EDMOND DE BELAMY and the price it fetched. Robbie Barrat was dissatisfied with the lack of proper attribution.
Robbie Barrat, an artist from West Virginia, not too far from where I reside, used free open-source research code to create his art and made the models available for experimentation. The paintings used to train the model were sourced from the public domain via a website named wikiart.org; Robbie downloaded the images and ran the training code himself. The artist collective Obvious purportedly used this painting model without seeking his permission.
Hence, there was a lack of courtesy in terms of attribution, and it becomes perplexing due to the nature of releasing code and AI models on an open-source code repository.
Despite the controversy, I familiarized myself with Robbie’s models and the code he employed, which he had replicated from another code set created by an AI research team: Alec Radford, Luke Metz, and Soumith Chintala, who devised this process for training AI to generate images known as DCGAN. Through my interactions with Robbie, I also forged a new friendship as I developed my own AI art based on what he had built upon what these researchers had originated.
This situation does provoke thoughts about who truly creates the art and who should benefit from it. With numerous individuals contributing to different components, this issue is likely to persist with AI-generated content. Who is the author, and who reaps the profits? While it is collective knowledge, the contributors also require income to sustain themselves.
Consequently, I utilized Robbie's code called ART-DCGAN to train AI models on my own art that I had been creating over the past six years. Furthermore, I collaborated with Robbie to employ his models as a foundation and train on top of them, duly acknowledging him in the description of the art.
AM: The first AI artwork you made, titled THE ENTITY, is also a comment on the future of man and machine. It is a science fiction short story accompanied by NFTs. What was first—the art or the story?
BI: The image came first. In making the first AI model that became my first series of NFT art, I used my oscilloscope art from previous pieces and an installation of oscilloscopes. The model generated lots of strange symbols. One of them looked like an alien face. That one image inspired the story. The other symbols were an attempt by this entity to communicate with humans. ALIEN INTELLIGENCE was the first piece I released. This face of an alien entity is in one of the grid cells in the animated gif.
Then the story inspired new art. And I put that art into a new model later on that inspired more of the story. Now I have been using a large language model similar to GPT to help me finish the story.
It is the story of an AI entity from the future where all humans are extinct. The Entity travels back in time through computer networks to learn what they are and eventually to help humans regain their existence. (THE HODL FRAME OF NORTH BAY, CANADA). The art and the story deal with questions such as: Is technology (AI) evil? What is our place in relation to technology? Can AI become sentient? Who are we? What is real?
By the way, The Entity showed up again in NAKED FLAMES, my most recent series, which is part of the exhibition THE PATH TO THE PRESENT, 1954-2024 at EXPANDED.ART in Berlin.
AM: What are your answers to these questions more than five years later? Is technology evil? Can AI become sentient?
BI: I am not so sure about technology being evil now, but it is sometimes an irresistible power like a hurricane. I think we have to be able to adapt to it because, as a form of human collective consciousness, it is impossible to resist. At the point where a small group of people harness it to control everyone else, it becomes necessary to work to overcome one technique or technology with another. I am really thinking about someone like a dictator who harnesses his citizens or a cult leader who can direct their followers to create technology to force the dictator's will on the larger population.
I see artificial intelligence as collective intelligence. This is something I picked up from Holly Herndon and Mat Dryhurst, an artist duo that is based in Berlin and at the forefront of AI art and the impact it is having on culture and society.
It is the collective knowledge of whatever the model has been trained on. OpenAI, Anthropic, and many other AI software companies are building models that contain images and language in many languages. They can analyze an image and some text and analyze the relationship between them. It is a correlation of ideas across a massive corpus of information. It is not sentient. I am not sure if it will ever be conscious without some sort of mysterious, perhaps spiritual connection to the higher power, I suppose. We don’t know what sentience is. Maybe creating something that approaches it is how we learn what it is.
First, I guess these AI systems will need to be trained on how to train themselves. But who knows what will happen with a "robot" that can rebuild itself or build a slightly better robot? Perhaps the machine has to be made biological to link into the essence of life.
In my story about the AI entities from the future, they become sentient when the earth is destroyed and the essence of humans is fused into the AI war monsters that were built to defend the interests of the warring nations of earth.
AM: As you have just mentioned, you minted your first artwork on SuperRare when the platform was new in 2018. How did you learn about NFTs?
BI: This will be a very familiar answer to many of the artists starting in NFTs around that time. Jason Bailey (Artnome) had a company and blog about art and technology. He was building a system to catalog physical art so people could look up the specifications of art and find the history of prices.
He wrote a series of posts about NFTs, which he called CryptoArt. In one of the posts, he was searching for artists willing to put their art on the blockchain, specifically on SuperRare. He described how they were having a very hard time finding any artists who wanted to risk this new way of selling art.
At the same time, I learned about Robbie Barrat, who was the first artist on SuperRare, and that was certainly interesting because I was exploring his AI code and models.
AM: Your short story that was connected to the NFTs was released on Patreon. Did you consider NFTs as a new artistic medium or as a way to get compensated for sharing art on the Internet?
BI: I consider NFTs to be both an artistic medium and a way to monetize my art. It is much more than just those things, as it provides delivery and provenance tracking.
Some art I create uses the NFT simply as a way to monetize digital art. Before NFTs, there was no widely accepted way to signify value, provenance, or scarcity for digital art, and NFTs also come with a built-in system of commerce and delivery. But I try to use the NFT as a medium as well. Any art that is on a blockchain inherits some of the essence or aura of tokenization.
My first pieces I sold for $20 US dollars, so I was just happy that I made any kind of compensation at all. It was validating as an artist to know that someone would pay anything at all for my art.
But I have also used the blockchain as a medium for specific art pieces. When I first learned about blockchain, around 2016 or 2017, I regarded it as a financial ledger, a spreadsheet of global financial transactions. It was a system that promised to move money between people at very low cost and without political interference. It was the idea of micropayments that attracted me.
I was building an art piece called SOUL SCROLL based on the book THE HANDMAID'S TALE by Margaret Atwood. The women in the book are not allowed to work outside the home or to read. They are also not allowed to pray to God, so they place calls to an automated machine over the phone and are charged a fee per prayer. Then, in a remote room, machines print each prayer on a scroll of paper and speak it out loud. I built a version of this machine and needed a way for people to pay for prayers.
The artistic concept I had was to have a "system or technique" where the art was fused with monetary, political, and religious power. All of these powers were fused together to maintain the control of a small group of men.
So I saw the blockchain as an art form in itself to communicate an idea. Especially Ethereum with its smart contracts. It is possible to put code into the blockchain to automate the movement of tokens, money, and NFTs.
An example of this is WE ARE NOISE AND FORM. It is generative art, where the code that makes the image is placed inside the NFT smart contract. When the purchase is made, the code runs and realizes its form based on a random number. I like to create art that is actually on the blockchain as a medium. But it is difficult to invent something visually compelling that uses the blockchain as a direct medium and that can be sold to an individual collector.
When I was first accepted to SuperRare, I saw it as a new way to get compensated. The NFT is a linked image, so to me, the medium is still a JPEG or GIF art. But fusing it with a transactional record also made it a token of value. As more of my art sold, it was an incentive to continue using the sales to purchase the tools I needed.
Again, I see NFTs as both an artistic medium and a way to generate a living. It is this strange fusion of both. I enjoy that it is transparent. When confronting the art, one is forced to consider what other people paid for it. Art actually becomes money.
AM: There’s a growing interest among collectors in early GAN works. How do you recall the past few years? What are some of your personal highlights?
BI: It was great to gain recognition, and I was working on a new GAN series at the time called PAINTING WITH FIRE: A HISTORY IN GANS, which was a celebration of the invention of the GAN (Generative Adversarial Network) and all the different output styles that could be created with the various evolutions of GAN. So it was perfect timing, with the interest from GAN art lovers competing for the oldest GAN NFT pieces while also having a new collection ready to exhibit.
It was interesting to see my art judged by its age alone. This is an interesting byproduct of the blockchain ledger: every NFT carries a timestamp recording its creation down to the second. As with any art, collectors decide to collect based on a certain technique or movement. Just as there are art lovers who collect Cubism, there are people who enjoy this specific AI technique called GAN (Generative Adversarial Network). It is not as easy to create as the current trend of using word prompts with prebuilt models.
AM: How do you explain GAN to people who have never heard of it?
BI: GAN stands for Generative Adversarial Network. It is a technique, and code, for training a model on images in a specific way. A person has to gather at least 1,500 images to go into the training. The GAN trains itself in a loop by setting up two competing tasks.
One task, or bot, I like to call the artist. Starting from random static, it tries to draw images that resemble samples from the training set. The other task, or bot, is the curator or judge. The curator compares each generated image to another set of samples from the training set and gives a set of scores based on which parts of the image match the samples. This is why it is adversarial.
The scores are added to a matrix of probability neurons. This changes the way the artist bot draws the next batch of images. It makes more accurate decisions. This score also helps the curator get better at judging the images.
It loops around millions of times. As the person in control of the process, I watch the image outputs and stop the process when I see the results I want.
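The artist-versus-curator loop above can be caricatured in a few lines of plain Python. This is a drastically simplified numeric sketch under loose assumptions: the "images" are single numbers, the curator is a fixed closeness score rather than a learning network, and the artist updates by nudging rather than by gradient descent. A real GAN trains two neural networks against each other.

```python
import random

random.seed(0)

# The "training set": numbers clustered around 5.0 stand in for real images.
training_set = [random.gauss(5.0, 0.5) for _ in range(200)]

artist_mean = 0.0  # the artist's current idea of what the data looks like

def curator_score(guess, real_samples):
    """Higher score = harder to tell apart from the real samples.
    (In a real GAN the curator is itself a network that also learns.)"""
    avg_real = sum(real_samples) / len(real_samples)
    return -abs(guess - avg_real)

initial_error = abs(artist_mean - 5.0)
for step in range(1000):
    # The artist draws a batch of candidate "images" starting from noise.
    candidates = [artist_mean + random.gauss(0, 1.0) for _ in range(10)]
    # The curator judges each candidate against a sample of real data.
    batch = random.sample(training_set, 20)
    best = max(candidates, key=lambda c: curator_score(c, batch))
    # The artist nudges itself toward whatever fooled the curator best.
    artist_mean += 0.05 * (best - artist_mean)

final_error = abs(artist_mean - 5.0)
print(f"error before: {initial_error:.2f}, after: {final_error:.2f}")
```

The loop structure — generate, judge, adjust, repeat — is the same one that runs millions of times in real GAN training, with the human watching the outputs and deciding when to stop.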
Interesting GAN art is somewhat difficult to make, so there is a rarity to the technique compared to the newer prompt-based systems that are being placed into almost every visual editing tool today.
AM: Why is GAN art difficult to make?
BI: GAN art is difficult to make because setting up the open source software for it is technically challenging to start with. It requires a GPU card that has sufficient memory to handle training the models. At one time, Runway ML had a hosted system for GAN model training, and that made it much easier.
There are different versions of GAN software that have been improved over time. The older ones, like DCGAN, have a distinct visual style. But the older the model, the harder it is to get the software to run because, like all software, it goes obsolete. GAN is no longer the most realistic way to make image models. The new diffusion technique used in Stable Diffusion and Midjourney is much more realistic.
I currently use StyleGAN 2 and StyleGAN 3, which can produce output at 1024 pixels. I have pushed DCGAN up to 512 pixels by modifying the code, but it is more glitch-like in its results.
The other part that makes GAN difficult is that the user has to gather images and curate them to create the effect they want. It is not just a matter of writing instructions or prompts.
AM: Can you tell us more about your latest body of GAN work, titled NAKED FLAMES that is part of the exhibition THE PATH TO THE PRESENT, 1954-2024 at EXPANDED.ART in Berlin?
BI: NAKED FLAMES is a new exploration of something I was inspired by in 2018, combined with ideas from my last project, PAINTING WITH FIRE: A HISTORY IN GANS. The first piece Robbie Barrat released on SuperRare came from a model trained on public-domain paintings of nudes, from the 1700s to the 1900s, sourced from wikiart.org. I wanted to play with mixing these paintings of nudes with images of fire.
The fire images I curated one by one from public domain sources, my own photographs, my oscilloscope fire drawings, and images submitted by a few collectors of my art. I asked collectors to send me their photographs of fire. I included forest fires, campfires, stove burners, cars on fire, basically any image with fire in it. I drew my own line drawings of fire with an oscilloscope.
I was also playing with two ideas. First, fire as technology: fire is an ancient technology that changed the way humans live. Second, the nude as a symbol of human frailty, a being that needs the protection it creates for itself. In this way, I can clothe the human form in an armor of fire.
I also envisioned fire as a way the ancients tried to predict the future or were inspired by stories of the gods. Because of our ability to interpret patterns in random events, fire is again a symbol of technology and AI to me. I use AI to inspire stories and predict things, so it is like its ancient ancestor, fire.
Another driving force for me was a desire to make art with a hint of the figurative. But often, the models generate surprises. It often takes the concepts I have wrapped up in my symbology and produces something completely different. A goal of mine in this is to control AI and give the art being produced a human touch despite all the technology.
So out of the thousands I generated, I found the 20 most compelling pieces and refined them a bit more. And so I ended up with a fusion of a candle and a human, fire as some sort of cellular tissue, a fire god, various abstractions of the human body, personifications of fire, and some that remind me of Georgia O'Keeffe’s landscapes of New Mexico.
AM: How do you select 20 images out of thousands of images you create on the same topic? What are your criteria?
BI: I pick hundreds of them quickly and work my way to smaller and smaller batches. I sometimes keep generating images until I find something surprising. I look for those that make me think of something new, provoke an emotional response, or are just plain weird. I look for combinations of patterns and colors that I would have never thought of by myself. First, I pick ones that might fit my original concept for the piece, if I had a concept at all. Another criterion is whether the images tell a story that can hold them together. Of course, there are common artistic criteria for form and composition that I look for.
In the case of NAKED FLAMES, there was a lot of uniformity, and it was a matter of picking the best based on my personal preference. In this piece, I was looking for figures, faces, and bodies. But I also found some beautiful abstracts.
Sometimes I pick the candidates by moving the ones I like to a separate folder. Then I experiment with cropping. I use my partner and wife as a curator as well. She is able to find some of the most original pieces, ones that look all too familiar to me after I have looked at 2,000 to 5,000 different images. She has found many excellent images that I would have passed over.
AM: In your opinion, what defines good or relevant AI art?
BI: It is a subjective list of things to me. Let me see if I can list them and then explain them: Wabi Sabi, emotion, concept, technique, and sometimes humor, wit, and playfulness. But the time it takes to produce is not something that means much. Sometimes AI models can yield very interesting and compelling works of art.
This is something I really enjoy about AI—when something comes out that is surprising and such an original idea.
After working and seeing so much AI art, it becomes easy to see what is just a pure regurgitation of whatever the model was trained on. Finding something and then conceptualizing or humanizing it is what I find "good." I am careful with art that claims to be made by the AI automatically or is signed by the AI entity. All AI art has a creator, or rather, many creators. There is also a heavy dependence on curation. A human needs to pick out the best outputs. To say the AI did all the work makes it sentient, and that is a very hard thing to prove.
Wabi Sabi to me is the idea of imperfection; I prefer AI art that is not hyperrealistic. I love the uncanny valley and weirdness the older models generate.
Emotion: I enjoy AI art that has some sort of emotional component. Some of my art that has been most well received are those pieces that embody some sort of emotion from myself.
Concept: the concept and the emotion are attached to objects or symbols, which is what I do. The model mixes the concepts together to generate a visual story of them. I appreciate the concept and the idea. For example, Kevin Abosch's art has a high concept value. As in fiction, where it is said that a story is a lie that tells the truth, AI art is a fake that is trying to communicate the truth.
Technique is not always important, but with some artists, like Pindar Van Arman and his robot paintings, it is the beauty and difficulty of the work that make it interesting.
Humor and wit are often what I like in AI art.
Often, it is those who are trying to break the technology and force it to their will that are attractive to me.
With the current state of technology, it is nearly impossible to tell which images have been edited with AI, and the line is only going to blur further. Starting from a normal photo, an artist can use tools like Canva and Photoshop generative fill to add or remove people and things in a few minutes. Image enlargement is done with AI models trained on libraries of images and textures. The question is going to be what is not AI art, and what is real.
AM: How do you approach working on a new project or body of work?
BI: All sorts of different ways. I also use other mediums than AI, so sometimes I just want to create shapes or write code that generates images based on an object or idea.
Occasionally, I am just experimenting with a technique or modifying code to make AI systems or tools like AfterEffects or Photoshop glitch or produce strange outputs. As I am experimenting, an image forms on the computer that carries a feeling or emotion in symbolic form. I realize that it is an emotion I did not know I was experiencing. Somehow, my subconscious feelings and thoughts are brought out by the activity of making art.
Most of the time, I am just playing. What happens if I combine this image with another image, or take two different technologies and mix them together? What about old technology? What if I take analog film and combine it with a GAN or other AI code?
Sometimes a group exhibition or contest will present an idea they want art to center on. Or I have a concept to explore. One of these was to make art around the concept of FACES, so I took images of gas masks and clocks and trained a GAN model; the results were an uncanny mix of both. In another case, I wanted to comment on the results of the 2016 election, so I grabbed images of Trump from TV news and debates and used them to train a GAN model.
I put emotion into the art by selecting concepts symbolized by a small set of objects, objects that oppose each other: toilets and crucifixes, or rockets and crucifixes, and in this series, NAKED FLAMES, the nude and fire. Other artists put emotion into the art in other ways; sometimes it is in the title or description of the work.
For my AI video work, I keep an eye out for certain movements in the world, like a helicopter flying low over the Lincoln Memorial and the Tidal Basin in DC, moving from one side of the frame to the other. This works well with a technique called Next Frame Prediction, where a model is trained on time-based video frames. Then you give it a new image to start from, and it attempts to predict the next frame over and over, each prediction feeding the next. To me, it can communicate the idea of entropy over time, of order becoming disorder. Reverse it, and you have disorder becoming order.
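The autoregressive loop behind Next Frame Prediction can be sketched in a few lines. This is only an illustration, not Ionson's actual pipeline: the `predict_next_frame` function below is a hypothetical stand-in (a simple box blur) for a trained next-frame model, chosen because repeated application visibly washes out detail, echoing the order-to-disorder idea.

```python
import numpy as np

def predict_next_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a trained next-frame model: a 3x3 box blur.
    A real model would be a network trained on consecutive video
    frames; the blur merely lets the rollout loop run end to end."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + frame.shape[0],
                          1 + dx : 1 + dx + frame.shape[1]]
    return out / 9.0

def rollout(seed_frame: np.ndarray, n_frames: int) -> list:
    """Autoregressive rollout: each predicted frame becomes the
    next input, so errors and smoothing compound over time."""
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(predict_next_frame(frames[-1]))
    return frames

# Start from a single seed "image" and let the model run forward.
rng = np.random.default_rng(0)
seed = rng.random((64, 64))
video = rollout(seed, n_frames=30)
# Detail washes out frame by frame; reversing the list plays
# the same sequence as disorder becoming order.
print(len(video), video[-1].std() < video[0].std())
```

With a genuinely trained model the same loop hallucinates plausible motion for a while before drifting, which is where the entropy effect Ionson describes comes from.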
Sometimes I approach the art using another artist's work as inspiration and then work to add a new layer to it. I keep playing until I have something compelling to look at. Or I keep running the resulting images through the many different AI techniques I have learned, to see how weird it gets, and at some point I find images that are worthy. A recent experiment involved training models on images of bones floating in the sky.
Some of the artists and writers I have used as starting points are Georgia O’Keeffe, Robbie Barrat, Vera Molnar, John Cage, Elsa von Freytag-Loringhoven, Duchamp, XCopy, Nam June Paik (of course), and Jennifer Bartlett.
After the experimentation and curation, it is a matter of refining the images to look the best they can using cropping and color adjustments. Or perhaps I just throw a lot of them back into another round of training and see what happens.
AM: What is your recommendation for artists who would like to start working with AI and release NFTs? Are there things you wish you had known earlier, or when you started?
BI: My recommendation to artists regarding NFTs and working with AI is to build on what you already know and be willing to experiment with something new. As for releasing art as an NFT, I am wary of giving too much advice without knowing a specific artist's path. For some artists, I might suggest trying it out under a pseudonym first to learn the process. The best way to learn about cryptocurrency and NFT releases is to try them with a very small investment. The power of using a blockchain was unbelievable to me until I tried it.
Invest in a good online software tool to track your cryptocurrency transactions; it will make reporting them for taxes much easier.
My advice for using AI is to experiment with it for a period of time to get used to it and how it fits into your practice. AI is being inserted into every image and video editing product right now, so many parts of it are very easy to use. But if you want to use AI in an experimental way, to create something different that has your own AI aesthetic, you need to find the tools that let you do that. I am thinking of a tool like Runway to get started. I cannot recommend this to everyone, but those with a background in code, such as generative artists, might want to install open-source tools like StyleGAN3 from the NVIDIA Labs GitHub on a computer with a GPU. Perhaps find an AI artist who can help you.
When I started out, I wish that I had done more research to find the latest papers on artificial intelligence. I could have used more advice on how to track taxes for cryptocurrency and trading in NFTs. It gets quite complicated to track all the transactions in order to pay taxes properly.
AM: What is your prediction for the future of AI and art?
BI: AI use in society and art will become ubiquitous, and we will no longer call what we use it for now AI. The label will move on to the next shiny capability, like artificial sentience or something.
With some wishful thinking, I predict that the individuals living today whose words, ideas, images, and videos are being fed into the training of AI models will be compensated for it.
Artists will continue to create new forms of AI, and it will become even more automated as new layers of intelligence are added to perform higher-level executive functions.
I also predict that there will be a market for art and artists creating art that does not use AI, or uses only very limited technology. A few artists who can create by hand with materials like paint will continue. I enjoy doing this myself using older technology like oscilloscopes and film cameras.
I believe that companies and open-source AI engineers and researchers will start paying artists and writers to create art and written content for the sole purpose of training AI better. I do this myself to train unique, original models: I collect my own photographs, and right now I am using film cameras to gather subject matter for training new models.
The blockchain and AI agents will be combined, giving these agents the ability to buy and sell content and services. So perhaps an AI agent will decide it needs a certain kind of content, and hopefully it has been taught enough ethics to pay for the NFT and not just right-click and scrape.
AM: Thank you for sharing your thoughts and ideas!