As a philosopher, my discussions of art and AI tend to focus on meta-aesthetic topics, such as trying to define “art” or arguing about whether an AI can create true art. But there are also pragmatic concerns about AI taking jobs from artists and changing the field of art.
When trying to sort out whether AI-created images are art, one problem is that there is no definition of “art” in terms of necessary and sufficient conditions that allows for a decisive answer. At this time, the question can only be answered within the context of whatever theory of art you might favor. Being a work of art is like being a sin in this way: whether something is a sin is a matter of whether it counts as a sin in this or that religion. This is distinct from the question of whether it truly is a sin. Answering that would require determining which religion is right (and it might be none, in which case there would be no sin). So, no one can answer whether AI art is truly art until we know which theory of art, if any, has it right. That said, it is possible to muddle along with what we must work with now.
One broad distinction between theories relevant to AI art is between theories that focus on the work and theories that focus on the creator. The first approach holds that a work must have certain properties to be art. The second approach holds that a work must be created in a certain way, by a certain sort of being, to be art. I will begin by looking at the creator-focused approach.
In many theories of art, the nature of the creator is essential to distinguishing art from non-art. One example is Leo Tolstoy’s theory of art. As he sees it, the creation of art requires two steps. First, the creator must evoke in themselves a feeling they have once experienced. Second, by various external means (movement, colors, sounds, words, etc.) the creator must transmit that feeling to others so that they are infected by it. While there is more to the theory, such as ruling out directly causing feelings (like punching someone in anger, which makes them angry in turn), this is the key to determining whether AI-generated works can be art. Given Tolstoy’s theory, if an AI cannot feel an emotion, then it cannot, by definition, create art. It cannot evoke a feeling it has experienced, nor can it infect others with that feeling, since it has none. However, if an AI could feel emotion, then it could create art under Tolstoy’s definition. While publicly available AI systems can appear to feel, there is as yet no adequate evidence that they do feel. But this could change.
While the focus of research is on artificial intelligence, there is also interest in artificial emotions, or at least the appearance of emotions. In the context of Tolstoy’s theory, the question would be whether an AI feels emotion or merely appears to feel. Interestingly, the same question also arises for human artists, and in philosophy it is called the problem of other minds: the problem of determining whether other beings think or feel.
Tests already exist for discerning intelligence, such as Descartes’ language test and the more famous Turing Test. While it might be objected that a being could pass these tests by faking intelligence, the obvious reply is that faking intelligence so skillfully would seem to require intelligence, or at least something functionally equivalent to it. To use an analogy, if someone could “fake” successfully repairing vehicles over and over, it would be odd to say that they were faking. In what way would their fakery differ from skill if they could consistently make the repairs? The same would apply to intelligence. As such, theories of art that make intelligence (rather than emotion) an essential quality of an artist would allow for a test to determine whether an AI could produce art.
Testing for real emotions is more challenging than testing for intelligence because the appearance of emotions can be faked by using an understanding of emotions. There are humans who do this. Some are actors and others are sociopaths. Some are both. So, testing for emotion (as opposed to testing for responses) is challenging, and a capable enough agent could create the appearance of emotions without feeling them. Because of this, if Tolstoy’s theory or another emotion-based theory is used to define art, then it seems impossible to know whether a work created by an AI would be art. In fact, it is worse than that.
Since the problem of other minds applies to humans, any theory of art that requires knowing what the artist felt (or thought) leaves us forever guessing: we cannot know what the artist was feeling, or whether they were feeling anything at all. If we take the practical approach of guessing what an artist might have been feeling and whether that is what the work conveys, this will make it easier to accept AI-created works as art. After all, a capable AI could create a work along with a plausible emotional backstory for its creation.
Critics of Tolstoy have pointed out that artists can create works that seem to be art without meeting his requirements. An artist might have felt a different emotion from the one the work seems to convey. For example, a depressed and suicidal musician might write a happy and upbeat song affirming the joy of life. Or the artist might have created the work without being driven by a particular emotion they sought to infect others with. For these and many other reasons, Tolstoy’s theory does not give us what we need to answer the question of whether AI-generated works can be art. That said, he does provide an excellent starting point for a general theory of AI and art in the context of defining art in terms of the artist. While the devil lies in the details, any artist-focused theory of art can be addressed in the following manner.
If an AI can have the qualities an artist must have to create art, then an AI could create art. The challenge is sorting out what these qualities must be and determining whether an AI has, or even could have, them. If an AI cannot have the qualities an artist must have to create art, then it cannot be an artist and cannot create art. As such, there is a straightforward template for applying artist-focused theories of art to AI works. But, as noted above, this only tells us what a given theory says about the work; the question remains whether the theory is correct. In the next essay I will look at work-focused approaches to theories of art.
By the by: is that characterization, if it represents Tolstoy, real, or is it AI? The really big beard might mean either, because exaggeration-for-effect could SAY either. And maybe THAT is the point, insofar as, uh, fooling the people, and so on. So, is it real, AI, or Memorex? (Sorry, the Memorex thing is memory.) OK. Meaning is John Messerley’s bailiwick. My tablet did not recognize Tolstoy, installing Bolshoi or Bolshevik instead. Must have been the Russian Connection.
Or, something…
Had context been different, I might have thought I was looking at a charachiiture of the late Daniel C. Dennett. So, one must be attentive, right? One must also know how to spell. Nineteenth-century software was better with that, it seemed to me. Yeah, charachiiture is probably wrong; I don’t have a dictionary anymore.
This is deep stuff, depending on which end of the pool one is approaching. For example, one might observe Dali’s melting clocks and wonder if he thought about Einstein and relativity…if, in fact, Dali could grasp the physics and mathematics that underpinned the notion. Or if the artist was making a pictorial wisecrack, or just having a good time, knowing some philosopher would be thinking about it too. Those pragmatic issues you mention could include whether artists and illustrators lose livelihoods because sponsors choose to engage with AI proponents’ creations in order to save money and/or time. That would be pragmatic for some; not so much for unemployed artists and illustrators. Good work, Professor. Always engaging and thought-provoking.