
Bård’s AI Artistic Journey

It started with Nam June Paik in 2012 and led to video art, then glitch art, then oscilloscopes. Then I read The Handmaid’s Tale and created a Soul Scroll, which led me to micropayments and Steemit. I had been thinking about how to create AI art, but Obvious selling at Christie’s got me really interested in doing it for myself. This put me right in sight of Robbie Barrat, SuperRare and Jason Bailey in 2018.

I had been creating oscilloscope video art for a few years, working on small video pieces in which the letters of the alphabet converted into objects. I turned that work into a physical artwork of a hornbook that was activated in augmented reality with Artivive. I also created an oscilloscope art animation that ran on six oscilloscopes for an oscilloscope rental company, as well as oscilloscope segments from a failed commission for a musician’s album art. With an oscilloscope you can draw images with sound: computer software converts an SVG shape into sound frequencies for the right and left channels, and the resulting signal produces an image on the oscilloscope screen.
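
A minimal sketch of that idea (not my original tooling): sample a shape’s outline as (x, y) points, send one coordinate to each stereo channel, and an oscilloscope in X/Y mode will trace the shape. A circle stands in for an SVG outline here, and the sample rate and file name are just placeholders.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 48000
SECONDS = 5

# Sample the outline of a shape as (x, y) points in the range [-1, 1].
# A circle stands in for a parsed SVG path here.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x, y = np.cos(t), np.sin(t)

# Repeat the outline for the full duration: one coordinate per channel,
# so an oscilloscope in X/Y mode traces the shape.
reps = SAMPLE_RATE * SECONDS // len(t)
left = np.tile(x, reps)
right = np.tile(y, reps)
stereo = np.stack([left, right], axis=1)

# Scale to 16-bit samples and write a stereo WAV file.
wavfile.write("oscilloscope_shape.wav", SAMPLE_RATE, (stereo * 32767).astype(np.int16))
```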

I don’t know whether I got interested in Robbie Barrat’s project art-DCGAN or in Jason Bailey first. I did not realize they were already working together. But the sale of Edmond de Belamy really pushed me to get going on creating AI art. I spent many hours trying to get art-DCGAN running on an AWS instance; once I had it running, I just had to train it on my own images.

AI Generated Nude Portrait #1 – Robbie Barrat

Around this time I saw Jason Bailey’s blog post about SuperRare looking for artists who wanted to sell on the platform. I applied and was sort of accepted, but did not hear back after I submitted a few samples. I resolved to use AI art to get on SuperRare.

I took many of these animations of symbols and letters that I had been making and cut them into frames. The art-DCGAN model only worked with 128 × 128 pixel images. These 20,000 frames became the training material for my first model. When I was done, the model produced some very compelling, low-fidelity images. I wrote up a blog post and tweeted it out to Robbie and SuperRare. Robbie was so happy to see someone do something original with his modified code that he gave me a glowing review.
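
A rough sketch of that kind of frame preparation might look like the following; the file names are illustrative, not my actual pipeline. Each frame is pulled out of an animation and resized to the 128 × 128 size art-DCGAN expects.

```python
import cv2  # OpenCV, for reading video frames
from pathlib import Path

SRC = "symbol_animation.mp4"   # an animation of letters and symbols
OUT = Path("training_frames")
OUT.mkdir(exist_ok=True)

cap = cv2.VideoCapture(SRC)
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # art-DCGAN trains on square 128 x 128 images.
    small = cv2.resize(frame, (128, 128), interpolation=cv2.INTER_AREA)
    cv2.imwrite(str(OUT / f"frame_{index:05d}.png"), small)
    index += 1
cap.release()
```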

Alien Intelligence – Bard’s first art on SuperRare

Why was he so happy? Well, there was a connection between Edmond de Belamy by Obvious and art-DCGAN. Robbie had produced models of historical portraits and nudes from the code he modified. Obvious had used these prebuilt models to create the art piece they sold at Christie’s. He wanted artists to use the code to create original work with inputs from their own corpus.

Edmond de Belamy by Obvious

This convergence of events also opened the door for me to be curated into SuperRare in November 2018.

This was my first move into AI art. Previously I had sold one print of scan art that I created with a water-damaged scanner by removing the scan bar and waving it over objects, and I had sold the commissioned six-channel oscilloscope art. It was quite a step up to start seeing my art sell at a higher volume.

I went on to train a model on faces of Trump and on images of coins and cryptocurrency logos. Then I mixed nude paintings with faces of Trump, all using art-DCGAN.

I also dabbled with training models in Playform and ended up with some great pieces for CADAF 2019.

This led me, in 2020, to use Pix2Pix next-frame prediction to produce some very interesting videos from the output of the art-DCGAN oscilloscope model.
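
The core of that process is a feedback loop: a model trained on (frame, next frame) pairs is fed its own output over and over, so one seed frame unrolls into a video. A minimal sketch of the loop, with a placeholder standing in for the trained Pix2Pix generator:

```python
import numpy as np

def predict_next_frame(frame: np.ndarray) -> np.ndarray:
    # Placeholder: the real step runs the trained Pix2Pix generator,
    # which was trained on (frame_t, frame_t+1) pairs. Identity keeps
    # the sketch runnable.
    return frame

seed = np.zeros((256, 256, 3), dtype=np.float32)  # e.g. one oscilloscope-model output
frames = [seed]
for _ in range(120):  # unroll 120 frames (~5 seconds at 24 fps)
    frames.append(predict_next_frame(frames[-1]))
# `frames` can then be written out as the frames of a video.
```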

By this point StyleGAN had come out, so I moved to it with a model of crucifixes. I made one model on art-DCGAN and one on StyleGAN, releasing a few pieces from both. StyleGAN was easier, as I did not need to curate 20,000 images anymore. Then, in July 2020, I was able to work with Lawrence Lee to train StyleGAN on his lifetime of paintings. I created the model and selected outputs, and Lawrence Lee would then rework them to make them better.

I also created models of gas masks, clocks, toilets and rockets, and mixed layers from them. I have a series of rockets that are crucifixes. In a more recent series I used public domain images of bugs and fighter jets.

From 2020 to 2021 I created work using a combination of StyleGAN, StyleGAN2, Pix2Pix, art-DCGAN, SpadeFace and other models.

In October 2022 I minted my first Stable Diffusion piece, but I continue to go back to StyleGAN and Pix2Pix for their glitch-like effects.

I have recently released new art pieces, called Simulation Number 89, that I created using Stable Diffusion and StyleGAN. Stable Diffusion is a prompt-based model, but I have originated prompts that produce a very distinctive style. I will often take the outputs and enhance them in Photoshop.

Methods

My methods involve corrupting the model’s training and mixing discordant materials. I like to take two objects that are symbols of ideas and create models that mix them together. Then I throw in other images or my own drawings to confuse the model. I often throw in various oscilloscope drawings to provide some stylistic continuity and to create glitches or errors inside the model. I want to break the magic of the model to expose its flaws. This is often to say, “this is not real but you are real” or “do not believe everything you see.” AI can produce many realities, and we get to choose the path we want to walk. For me, to be an artist is to make a choice; in the end I choose what is my art and what is not.
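
On the data side, a rough sketch of the mixed-corpus idea (with illustrative folder names, not my actual datasets): discordant image sets are pooled into one shuffled training folder, so the model learns a deliberately confused blend.

```python
import random
import shutil
from pathlib import Path

# Illustrative folders: two symbolically loaded subjects plus
# oscilloscope drawings pooled into a single training corpus.
sources = [Path("crucifixes"), Path("rockets"), Path("oscilloscope_drawings")]
mixed = Path("mixed_training_set")
mixed.mkdir(exist_ok=True)

images = [p for src in sources for p in sorted(src.glob("*.png"))]
random.shuffle(images)
for i, img in enumerate(images):
    # One flat, shuffled folder of discordant images becomes the corpus.
    shutil.copy(img, mixed / f"{i:05d}.png")
```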

This leads me back to where I started. The original training material for my first AI artwork on SuperRare has just been put into an exhibition in the Oxford University libraries called Battledore.