Welcome to the Machine: Law, Artificial Intelligence and the Visual Arts
By Louise Carron.
On October 25, 2018, Christie’s auction house sold Portrait of Edmond de Belamy, a work of art made by Obvious, a collective of French artists, for $432,500, including buyer’s premium. The catch? This was the first auction sale of a painting generated through Artificial Intelligence (“AI”): Obvious used code to have a computer generate a portrait, printed with inkjet on canvas, which had been estimated at $7,000–10,000. The sale price and the method of creating this work raise questions about the relationship between art and AI, and about the legal implications that arise from it.
Background of Artificial Intelligence
I propose to consider the question: ‘can machines think?’
– Alan Turing
AI refers to the simulation of human intelligence by machines, where a computer is programmed to think like humans. The software learns automatically from patterns or features in the data presented to it.
Contrary to the recent hype, AI is not a new concept. AI is often associated with learning machines, which were developed during WWII to decipher military codes (think Alan Turing and “The Imitation Game” (2014)), based on neural networks and the processing of data to “find connections and derive meaning from undefined data.”[i] They have been used ever since, and have evolved along with current theories of the brain.[ii] Neural networks were “rebranded” as deep learning in the early 2000s,[iii] based on the theory that the brain processes information in layers, thereby allowing machines to work faster and analyze more data.
Common applications of deep learning include image and speech recognition. At its core, AI is based on repetitive learning through large amounts of data to recognize patterns. It is a self-learning and generative process that is used in many industries, from computer science to the natural sciences, economics, healthcare, retail, banking, and, more recently, art.
Today: AI does not create art, but “generates” it
Art’s first big step into AI came in 2015, when Google released DeepDream, an AI technology that enhances patterns in images, creating over-processed pictures with a dream-like, hallucinogenic appearance.
AI as a Research Tool
Beyond editing images, AI can also serve as an artistic research tool. One entertaining outgrowth of this is Google’s Arts &amp; Culture app, which finds the user’s lookalikes in art history.
In 2018, Rutgers University’s Art and Artificial Intelligence Laboratory developed a program to teach the computer about the history, styles, genres, and techniques of art in order to identify works of art, thereby answering questions of authenticity and attribution.[iv] To test this capacity, the researchers fed the machine hundreds of artworks without any indication of author, date, style, or artistic importance. Their findings revealed that “the machine encoded art history in a smooth chronology, without being given any notion of time […] the learned representations are clearly temporally smooth and reflect high level of correlation with time.”[v]
Additionally, as explained by Prof. Ahmed Elgammal, the director of Rutgers’ AI laboratory, “machine learning can play [a role] in the domain of art history by approaching art history as a predictive science to discover fundamental patterns and trends not necessarily apparent to the individual human eye.”[vi] The program could plausibly also be used to detect fakes or to identify the authors of orphaned works.
AI as a Generator
AI can also be exploited to create art, or at least to “generate” digital images resembling art. The process commonly used allows the machine to analyze thousands of paintings, photographs, videos, texts, music, etc. and to generate something replicating, recreating, and blending the styles of what it “saw.”
The deep learning process used here, called Generative Adversarial Networks (“GAN”), relies on the interaction of two sub-processes. “The first (called the discriminator) has access to a collection of images (training images). The second (called the generator) generates images starting from random. The discriminator tries to excel in identifying real images from generated ones, while the generator tries to excel in generating images that fool the discriminator into believing that they are real.”[vii] As an example, the French collective Obvious used a “training data set of more than 15,000 portraits created between the 14th and 20th centuries” to have their algorithm create Portrait of Edmond de Belamy.
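The adversarial tug-of-war described above can be sketched with the standard GAN losses. The following is a minimal numerical illustration, not Obvious’s actual code: the discriminator scores (`d_on_real`, `d_on_fake`) are made-up values standing in for a real model’s outputs.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy of probability scores p against a 0/1 label."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(label * np.log(p) + (1 - label) * np.log(1 - p)))

# Hypothetical discriminator outputs: probability that an image is "real."
d_on_real = np.array([0.9, 0.8, 0.95])  # confident on training images
d_on_fake = np.array([0.2, 0.1, 0.3])   # skeptical of generated images

# The discriminator wants real images scored as 1 and fakes as 0...
d_loss = bce(d_on_real, 1) + bce(d_on_fake, 0)

# ...while the generator wants those same fakes scored as 1.
g_loss = bce(d_on_fake, 1)

print(d_loss, g_loss)
```

Here `g_loss` is much larger than `d_loss`: the generator is not yet fooling the discriminator, so its gradient pressure to produce more convincing images is strong. Training alternates updates to the two networks until neither can easily improve.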
It is noteworthy that GANs do not retain information indefinitely, a limitation known as “catastrophic forgetting”: learning new information requires overwriting previously learned information and current progress, a problem researchers are trying to fix by teaching the computer to “learn what to forget.”[viii] Query: has the machine independently confirmed that art is a looping concept, a perpetual beginning?
Rutgers’ AI laboratory is currently working on its own creative algorithm. As explained by Prof. Elgammal, co-author of the 2017 paper “CAN: Creative Adversarial Networks, Generating ‘Art’ by Learning About Styles and Deviating from Style Norms,”[ix] the purpose of this project is to create a new system for generating art based on GANs.
The project is based on a circular GAN process where the generator, which does not have access to art, generates art starting from a random output, and simultaneously receives two signals from the discriminator – which has access to “a large set of art associated with style labels (Renaissance, Baroque, Impressionism, Expressionism, etc.).” The two signals are:
- Whether the “creation” is “art or not art”, and
- A “style ambiguity” signal that measures “how confused the discriminator is in trying to identify the style of the generated art as one of the known styles.”[x]
The generator learns from the signals and improves its ability to generate art; according to Elgammal, “on one hand it tries to fool the discriminator into thinking it is ‘art,’ and on the other hand it tries to confuse the discriminator about the style of the work generated.”
To attain this result, the researchers fed the computer 80,000 images of 15th–20th century Western paintings. The generated images do not depict typical figures, genres, styles, or subject matter. “However, this is not because of its inability to do so. Simply, if we remove the ‘style ambiguity’ signal, the model can in fact generate images that looks like portraits, landscapes, architectures, religious subject matter, etc. The model in this case is trying to emulate traditional art. Adding the style ambiguity signal forces the model to explore the creative space to generate novel images that differ from what it has seen in art history,” explained Elgammal.[xi] Thus, this feature of style ambiguity is the key to original AI creation.
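The style-ambiguity signal can be illustrated with a toy computation. The sketch below uses the entropy of the discriminator’s style posterior as a simplified stand-in for the CAN paper’s style-ambiguity loss (which is formulated as a cross-entropy against the uniform distribution over styles); the function name, the four style labels, and the probability vectors are illustrative assumptions, not the paper’s code.

```python
import numpy as np

def style_ambiguity(style_probs):
    """Entropy of the discriminator's style posterior: a simple proxy for
    the 'style ambiguity' signal, largest when no single style dominates."""
    p = np.clip(np.asarray(style_probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

# Hypothetical posteriors over 4 style labels (Renaissance, Baroque,
# Impressionism, Expressionism) for two generated images.
confident = np.array([0.97, 0.01, 0.01, 0.01])  # clearly one known style
ambiguous = np.array([0.25, 0.25, 0.25, 0.25])  # no identifiable style

print(style_ambiguity(confident), style_ambiguity(ambiguous))
```

A generator rewarded for high ambiguity is pushed toward images like the second case: recognizable as “art” to the discriminator, yet not attributable to any one style it has seen.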
Although the products of AI are often unintelligible, they can be hilarious and scary, as is evident from the chapter of the AI-written Harry Potter book entitled “Harry Potter and the Portrait of What Looked Like a Large Pile of Ash.” This is where the limits of the machine become visible: even if it can ensure that the product follows the rules of English, songwriting, or painting in a certain style, only a human can say whether it makes sense. “While imitation is interesting, even commercially valuable, it’s not in the true spirit of art. It’s simply reflecting back to us what we’ve already said.”[xii]
Therefore, the next step for AI is to go from the generative to the creative, when machines will be able to replace human creativity. For now, AI remains a powerful tool to analyze existing works, weed out forgeries, and authenticate and attribute works by analyzing an artist’s output or style and discerning patterns, as is done by Rutgers’ AI laboratory.
Pushing the Boundaries of Copyright Law
Massimo Sterpi, an international IP expert who bid at the Christie’s auction, said:
“Portrait of Edmond de Belamy is embedding all the current doubts and issues of algorithmic art: it is not even clear if it is protected by copyright (is its human element enough?) and, in the affirmative, who owns such copyright.”
Under U.S. copyright law, a “work” is awarded copyright protection if it is original, a notion developed by case law.[xiii] In the case of AI artworks, each painting is effectively unique; there is no way the machine can produce the same piece twice. But this does not mean that it is “original” for the purposes of copyright law, as the courts have interpreted the concept of originality as “originating from the author.” But then, who is the author?
The Ninth Circuit recently ruled that an “author” must be a human, holding that a monkey could not claim authorship in a selfie.[xiv] Here, both the machine and the artist controlling it could plausibly be perceived as authors, as the creative process is collaborative: the “artist” (or researcher) selects the images analyzed by the machine, and also controls the generative process through which the computer will “create” the artwork.
So, can a machine be the author? The answer could go both ways: on the one hand, the work is “produced by machine or mere mechanical process that operates randomly or automatically,” but on the other hand, the human artist (or engineer) behind the machine has the control over the data fed to the computer and the way it processes it.
It appears that the U.S. and most EU countries would place an AI-generated work in the public domain, as they require that a work carry the mark of its author’s personality,[xv] if not originate from a human being.[xvi] As such, machines are usually not deemed capable of authorship, because they do not have a “personality” and are not capable of “creativity” – although the law lacks a definition of the latter. One possible source of dispute is Sophia the robot, who was granted citizenship in Saudi Arabia, and the question whether that level of intelligence is sufficient to provide her with a personality.
In contrast, under UK copyright law, section 9(3) of the Copyright, Designs and Patents Act (“CDPA”) states: “In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” The copyright is therefore awarded to the human creator of the program who intervened in the creative process, even though the work was actually produced by a machine.
If copyright law cannot protect an AI-work, its “authors” may want to consider patents to protect the code underlying the algorithm. Obvious allegedly considered this route (a claim rebutted in an interview with Jason Bailey), but the process has the downsides of being long, tedious, and expensive.
To take it one step further, could the AI-work itself be a copyright infringement? Whether it is a song, a painting, or a photograph, the work is made by mixing different works together to generate a new one. At what point is the AI-work so similar to the processed works that it infringes the copyrights in them? It might be possible to raise a fair use defense in the U.S., arguing that the work is “inspired” by, rather than specifically copying, the style and content of the underlying works.
And what about resale rights? Obvious is a French collective, and its work may be sold on European soil, where the droit de suite is an important feature of the art market. If copyright law is interpreted as granting the computer the status of “author,” can the latter “claim” resale rights? Alternatively, if no copyright vests in the work, can it be traded on the market without worrying about resale rights? This would certainly please European auction houses and dealers.
Back to the Future
Why is the AI-made Portrait of Edmond de Belamy so important now? Is it worth its $432,500 price? What does the future hold for AI art?
All these questions only prove that AI is in its infancy when it comes to art, hence the head-scratching hype. If you were shocked by the Belamy sale, know that you are not alone. Mario Klingemann, a pioneering artist known for using neural networks, code, and algorithms in his work, is not happy about the proliferation of AI-generated art: “Because [GANs] create instant gratification even if you have no deeper knowledge of how they work and how to control them, they currently attract charlatans and attention seekers who ride on that novelty wave.”[xvii] Art critic Jerry Saltz was “shocked, confused, appalled” by Obvious’ lack of artistic creativity: “These algorithmic programs and codes have been in use for a very long time to make the exact same looking things.”[xviii] Obvious was also criticized for not having written the code used to generate the portrait: in reality, Ian Goodfellow (whose surname roughly translates to “Bel Ami” in French) wrote the formula, which is open-source, i.e. made freely available to be redistributed and modified. Interestingly enough, the code was used as the author’s signature on the painting.
Others see it as an opportunity and a sign of things to come: after the sale, Massimo Sterpi noted that “the very high price is the result of a very well conducted marketing campaign, where it was presented as the first auctioned artwork of this kind: there are plenty of collectors that are looking for cutting-edge art and this was a unique opportunity for them.” It is also true that AI offers an infinite number of possibilities for innovation, which attract a growing number of artists.
“AI is just one of several technologies that will have an impact on the art market of the future — although it is far too early to predict what those changes might be,” said Richard Lloyd, the man behind the Christie’s sale.[xix] One is left to wonder: is the market reacting to a new trend, which is likely to die fast, and are we valuing human creativity less than we do that of computers? In a way, the Belamy piece lacks a certain aesthetic and is not breaking down any barriers other than allowing AI-art to enter the market.
However, instead of letting machines take over, collaboration between AI and artists seems like an interesting compromise. Artists in the music industry have come up with “Algoraves,” interactive raves where DJs simultaneously write and project the code used to create the music they are playing. Another example of collaboration comes from the field of literary arts: The Day A Computer Writes A Novel (Japan, 2016) is an AI-written book which almost won the Nikkei Hoshi Shinichi Literary Award, for the literary and creative choices made by the team behind its conception. Ironically, or rather ominously for AI skeptics, the last sentence of the book says “The computer, placing priority on the pursuit of its own joy, stopped working for humans.” In the visual arts, artists such as Memo Akten, Tom White, and Mario Klingemann have been using machine learning as a creative tool to share an artistic or political message.
These collaborations between humans and machines exemplify the necessity that artists retain their human creativity and sensitivity, because machines are unable to paint a landscape from memory, to write songs about a heartbreak, or to write an intelligible Harry Potter book (for the time being).
[ix] “CAN: Creative Adversarial Networks, Generating ‘Art’ by Learning About Styles and Deviating from Style Norms,” presented at the 8th International Conference on Computational Creativity, Atlanta, Georgia, June 19–23, 2017. The paper can be accessed here.
[xi] Ahmed Elgammal, “Generating Art By Learning about Styles and Deviating from Style Norms”, id.
[xiii] Feist Publications, Inc. v. Rural Telephone Service Co., Inc., 499 U.S. 340, 347 (1991).
[xvi] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884); Naruto v. Slater.
[xvii] Mario Klingemann, cited in T. Schneider, N. Rea, “Has Artificial Intelligence Given Us the Next Great Art Movement? Experts Say Slow Down, the ‘Field Is in Its Infancy’”, September 25, 2018, available here.
- Heinrich Wölfflin, Principles of Art History: The Problem of the Development of Style in Later Art (1915).
- Norton Rose Fulbright, “Protecting IP Rights: Artificial Intelligence in Australia”, July 13, 2018, available here.
- Andres Guadamuz, “Artificial intelligence and copyright”, WIPO Magazine, October 2017, available here.
- Jason Bailey, “AI Artist Gives ‘Perfect’ TED Talk as Cyborg”, Artnome, November 18, 2018, available here.
About the author: Louise Carron is the Center for Art Law’s Executive Director. She pursued her studies in France, earning a double Bachelor’s degree in French law and Common Law and a Master’s degree in Comparative Business Law, before graduating with an LL.M. degree from the Benjamin N. Cardozo School of Law. She has a particular interest in art law, IP, and fintech, and she is currently seeking admission to the New York Bar. She can be reached at email@example.com.