
Source: artistik rezo 15-07-2020

Art follows technological developments: with the latest innovations in artificial intelligence (AI), artists have begun to experiment and find new ways of creating works of art.

Over the past decade, AI systems have produced paintings in the style of Rembrandt, distorted portraits reminiscent of Francis Bacon, and even novels and music, all by analyzing what already exists. The famous auction house Christie’s recently sold its first work of art created by AI for $432,500: a blurry face entitled Portrait of Edmond Belamy, produced by the collective Obvious. It is part of a new artistic trend of works generated by AI through machine learning. To make the portrait, its creators fed thousands of portraits to an algorithm, teaching it the aesthetics of past examples, and the computer then produced the painting. The sale attracted many negative comments: artists working with AI criticized the auction house’s choice, saying the creators of the famous portrait had used only a small part of the existing AI art technology. The central question in this debate is: can the painting be considered the product of a human mind?

What is algorithmic art? It is a fairly new art form in which the artist writes detailed code designed to produce a desired visual result. One of the first attempts was made by Harold Cohen, who wrote a program known as AARON in 1973. The program followed a set of rules determined by Cohen to produce a work of art. Cohen continued to develop the program over the years, but what it essentially does is carry out tasks according to its creator’s instructions. That is very different from what has happened over the last decade. New technologies tend to give programs more autonomy than before: machine learning and autonomous image generation now occupy the top spot in computer-generated art. The new algorithms are not written to follow a fixed set of rules; instead, they are written to analyze and learn a specific aesthetic by scanning thousands of images.
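To make this concrete, here is a minimal sketch of rule-based algorithmic art in the spirit Cohen pioneered, written in Python with matplotlib. The specific rules (concentric arcs with drifting radii), the sizes, and the output file name are invented for this illustration; they are not rules AARON actually used.

```python
# A tiny, hypothetical example of rule-based generative drawing:
# the artist fixes explicit rules, the program executes them.
import math
import random
import matplotlib.pyplot as plt

random.seed(7)
fig, ax = plt.subplots(figsize=(5, 5))

# Rule 1: draw 40 arcs around the centre.
# Rule 2: each arc's radius drifts slightly from the previous one.
# Rule 3: the arc's span and grey tone depend only on its index.
radius = 1.0
for i in range(40):
    radius += random.uniform(-0.05, 0.15)
    start = random.uniform(0, 2 * math.pi)
    span = math.pi * (0.3 + 0.5 * (i / 40))
    theta = [start + span * t / 100 for t in range(101)]
    ax.plot([radius * math.cos(a) for a in theta],
            [radius * math.sin(a) for a in theta],
            color=str(0.1 + 0.8 * i / 40), linewidth=1.5)

ax.set_aspect("equal")
ax.axis("off")
plt.savefig("rule_based_sketch.png", dpi=150)
```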

One of these algorithms is the GAN (Generative Adversarial Network), designed by Ian Goodfellow in 2014. It is a two-part algorithm: on one side, a generative network creates candidate works of art after scanning thousands of images; on the other, a discriminating network evaluates the candidates produced by the first. The goal of the generative network is to create images so close to real, human-made works of art that the discriminating network is deceived into making mistakes. The process requires initial training on a set of sample images. The artist plays an active role throughout: selecting the images to feed in, choosing among the final results, and, if desired, modifying the algorithm itself.
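The adversarial loop described above can be sketched in a few lines of PyTorch. This is only an illustration of the idea: the fully connected networks, the hyperparameters, and the random vectors standing in for scanned artworks are assumptions made for this sketch, not the setup used by Obvious or in Goodfellow’s original paper.

```python
# Minimal GAN sketch: generator G invents candidates, discriminator D
# judges real vs. generated, and each is trained against the other.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 256  # assumed sizes for the sketch

# Generator: turns random noise into a candidate "artwork".
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, image_dim), nn.Tanh())
# Discriminator: scores how likely an input is a real artwork.
D = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Placeholder for the thousands of scanned portraits mentioned in the text.
real_images = torch.rand(512, image_dim) * 2 - 1

for step in range(200):
    batch = real_images[torch.randint(0, 512, (32,))]
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator update: label real artworks 1, generated candidates 0.
    d_loss = (bce(D(batch), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call fakes "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```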

The images generated by a GAN are very similar to the distorted faces of Francis Bacon’s portraits, but there is a notable difference between the two. Bacon distorted the faces in his portraits to convey a message; the distortion is intentional. The GAN, on the other hand, did its best to produce accurate human portraits, yet the result looks as if an error had distorted the faces, meaning the GAN failed to imitate the human face. If you consider the process more important than the end result, however, you can see the real innovation of the GAN: its creative process (analyze, create, sort) is new, and the artist and the machine collaborate to create a work of art.

There is also a second algorithm, the CAN (Creative Adversarial Network), created by Rutgers’ Art and Artificial Intelligence Laboratory. The difference from the GAN is that the CAN learns existing styles and then creates art independently: the algorithm is programmed to produce original works. One part analyzes the existing aesthetic and generates a work of art, while the second part penalizes the result if it is too similar to the styles it has analyzed. Novelty is therefore the main goal of this AI, and the artist has no control over the final result. The main difference from the previous AI is that it learns styles: the user feeds it with 500 years of artwork (nearly 80,000 different pieces), and the algorithm then tries to create a new work that differs from everything it has seen.
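The “novelty” objective described above can be sketched as a small change to the generator’s loss: a second discriminator head tries to classify the style of an image, and the generator is rewarded for producing images whose style prediction stays close to uniform, i.e. hard to assign to any known style. As with the GAN sketch, the layer sizes, the number of styles, and the placeholder data below are assumptions for illustration, not the Rutgers lab’s actual implementation.

```python
# Sketch of a CAN-style generator loss: fool the real/fake head while
# keeping the style classifier maximally uncertain (style ambiguity).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim, n_styles = 64, 256, 25  # assumed sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, image_dim), nn.Tanh())

class Critic(nn.Module):
    """Two heads: a real-vs-fake score and a style classifier."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2))
        self.real_head = nn.Linear(128, 1)          # is this a real artwork?
        self.style_head = nn.Linear(128, n_styles)  # which known style is it?
    def forward(self, x):
        h = self.body(x)
        return self.real_head(h), self.style_head(h)

D = Critic()
noise = torch.randn(32, latent_dim)
fake = G(noise)
real_logit, style_logits = D(fake)

# (1) Adversarial term: make the real/fake head say "real".
adv_loss = F.binary_cross_entropy_with_logits(real_logit, torch.ones(32, 1))

# (2) Ambiguity term: cross-entropy against a uniform style distribution,
# so the generated image cannot be pinned to any learned style.
uniform = torch.full((32, n_styles), 1.0 / n_styles)
log_probs = F.log_softmax(style_logits, dim=1)
ambiguity_loss = -(uniform * log_probs).sum(dim=1).mean()

g_loss = adv_loss + ambiguity_loss  # the generator minimizes this sum
```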

On the other hand, these algorithms cannot convey a strong emotional impact or a strong message. Human artists are directly inspired by their environment, people, places, politics, and everything that happens around them; works of art are created to tell stories in an aesthetic way. In the case of AI, a curator must supply the meaning that the machine cannot give the work. For now, these systems can only be considered additional creative tools, like Adobe Illustrator or Photoshop.

Baran Cengiz

