Thousands of artists are urging the auction house Christie’s to cancel a sale of art created with artificial intelligence, claiming the technology behind the works is committing “mass theft”.
The Augmented Intelligence auction has been described by Christie’s as the first AI-dedicated sale by a major auctioneer and features 20 lots with prices ranging from $10,000 to $250,000 for works by artists including Refik Anadol and the late AI art pioneer Harold Cohen.
The question of whether AI art is art often fixates on details I either don’t care about or think rest on fallacious reasoning. I don’t like AI art as a concept, and I think it’s often going to be bad art (I’ll get into that later), but some of the arguments I see are centered on a strangely essentialist idea: that AI art is worse because of an inherent lack of humanity, treated as a central and undifferentiated concept; that it lacks an essential spark that makes it into art. I’m a materialist. I think it’s totally possible for a completely inhuman machine to make something deeply stirring and beautiful. The current trends are unlikely to do that reliably, but I don’t think there’s something magic about humans that gives them a monopoly on beauty, creativity, or art.
However, I think a lot of AI art is going to end up being bad. This is especially true of corporate art, and less so for individuals (especially those who already have an art background). Part of the problem is that AI art, the way it’s currently constructed, will always lack the intense level of intentionality that human-made art has. A probabilistic algorithm that correlates words to shapes will always lack the intention in small detail that a human artist making the same piece has, because there’s no reason for the small details other than probabilistic weight or a random element. I can look at a painting someone made and ask them why they picked the colors they did. I can ask why they chose the lighting, the angle, the individual elements. I can ask why they used certain techniques and not others, which movements they were trying to draw inspiration from, or which emotions they were trying to communicate.
The reasons are personal and build on the beauty of art as a tool for communication in a deep, emotional, and intimate way. A piece of AI art using the current technology can’t have that, not because of some essential nature, but just because of how it works. The lighting exists as it does because it’s the most common way to light things with that prompt. The colors are the most likely colors for the prompt. The facial expressions are the most common ones for that prompt. The prompt is the only thing that really derives from human intention, the only thing you can really ask about, because asking, “Hey, why did you make the shoes this shade of blue? Is it about the modern movement towards dull, uninteresting colors in interior decoration? They contrast a lot with how the rest of the scene is set up,” will only ever get you the fact that the algorithm chose it.
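To make that concrete, here’s a toy sketch of the mechanism. Nothing here resembles a real image model; the prompt, attributes, and weights are all invented. It only illustrates the point that a detail like “blue shoes” exists because of probability mass plus a random seed, not intent:

```python
import random

# Toy stand-in for a generative model: for each prompt it has "learned"
# a probability distribution over visual attributes. All data invented.
LEARNED_WEIGHTS = {
    "portrait, studio": {
        "lighting": {"three-point": 0.7, "backlight": 0.2, "candlelit": 0.1},
        "shoe_color": {"blue": 0.5, "brown": 0.3, "red": 0.2},
    }
}

def generate(prompt: str, seed: int) -> dict:
    """Pick each attribute by sampling the learned weights for the prompt."""
    rng = random.Random(seed)
    attrs = LEARNED_WEIGHTS[prompt]
    return {
        name: rng.choices(list(dist), weights=list(dist.values()))[0]
        for name, dist in attrs.items()
    }

image = generate("portrait, studio", seed=42)
# Asking "why is the lighting like that?" has only one true answer here:
# it carried the most probability mass, plus whatever the seed contributed.
print(image)
```

The “artist” you can interrogate ends at the prompt string; everything below it is weights and a seed.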
Sure, you can make the prompts more and more detailed to pack in more and more intention, but there are small, individual elements of visual art that you can’t dictate by writing, even to a human artist. The lost intentionality means a loss of the emotional connection. It means that instead of someone speaking to you, the only thing you can reliably read from AI art is yourself. It’s only what you think.
I’m not a visual artist, but I am a writer, and I have similar problems with LLMs as writing tools because of this. When I do proper writing, I put so much effort and focus into individual word choices. The way I phrase things transforms the meaning and impact of sentences; the same information can be conveyed in so many ways, with completely different focus and intended mood.
An LLM prompt can’t convey that level of intentionality, because if it did, you would just be writing the thing directly.
I don’t think this makes AI art (or AI writing) inherently immoral, but I do think it means it’s often going to be worse as an effective tool of deep, emotional connection.
I think AI art/writing is bad because of capitalism, which isn’t an inherent factor. If we lived in fully-automated gay luxury space communism, I would have already spent years training an LLM as a next-generation oracle for the tabletop roleplaying games I like. They’re great for things like that, but alas, giving them money is potentially funding the decline of the arts as a profession.
Photography, as opposed to painting, can’t convey that level of intentionality either. Part of the art of photography is dealing with the fact that you cannot control certain things. And yes, a complete noob can get absolutely lucky and produce something absolutely stunning and meaningful by accident.
Personally, I vibe much more with definitions of art that revolve around author intentionality on the one side and impact on the human mind on the other. AIs, so far, don’t have intentionality, nor can they appreciate human psychology or perception, so there’s really no such thing as “AI art”. There’s only “humans employing AI as a tool, just as they employ brushes and cameras”, and whether a piece created with the help of AI is art or craft or slop or any combination of those is down to the human factor, no different than if you had used some other tool.
So in my mind that auction is just as valid as one that focuses on photography. There’s a gazillion photographs made daily that aren’t art, and those don’t get auctions, just as the deluge of AI-generated stuff doesn’t make it to that point. It’s still about that “special something”, and being a materialist doesn’t mean you need to reject it: it’s recognised by a very material computer right there in your head. It’s hard to pin down, yes; if it were easy to pin down it wouldn’t be art but craft.
The university I went to had an unusually large art department for the state it was in, most likely because, thanks to a ridiculous chain of events and its unique history, it didn’t have any sports teams at all.
I spent a lot of time there, because I had (and made) a lot of friends among the art students and enjoyed the company of weird, creative people. It was fun and beautiful and had a profound effect on how I look at art, craft, and the people who make them.
I mention this because I totally disagree with you on the subject of photography. It’s incredibly intentional in an entirely distinct but fundamentally related way: because you lack control over so many aspects of it, the things you can choose become all the more significant, personal, and meaningful. I remember people comparing generative art and photography, and it’s really… aggravating, honestly.
A photography student I knew did a whole final-year project, a display of nude figures, that did a lot of work with background, lighting, dramatic shadow, use of color, angle, and deeply considered composition. It’s a lot of work!
I don’t mean here to imply you’re disparaging photography in any way, or that you don’t know enough about it. I can’t know that, so I’m just sharing my feelings about the subject and art form.
A lot of generative art has very similar lighting and positioning because it’s drawing on stock photographs, which have a very standardized format. I think there’s a lot of difference between that and what someone who does photography as an art has to consider. Many of the people using generative systems lack the background skills that would let them use them properly as tools. Without those, it’s hard to identify why a piece of visual art doesn’t work, or what needs to change to convey a mood or idea.
In an ideal world, there would be no concern for loss of employment because no one would have to work to live. In that world, these tools would be a wonderful addition to the panoply of artistic implements modern artists enjoy.
And that’s not precisely the same for AI… why? Why are the limited choices in photography significant, personal, and meaningful, but not the limited choices people make when generating pictures?
Yes. Because the majority of stuff is generated without much intentionality, by amateurs, or both – but so are most photographs, they just don’t ever even get analysed in the context of being art or not because their purpose is to be external memory, not art. And arguably most AI generated stuff should not get analysed in the context of being art.
But that doesn’t mean you can’t control lighting, or that someone with a sufficiently deep understanding of both the medium of pictures in general and the tool that is AI would not, at some point, look at what’s on the screen and ask themselves, “Do I want different lighting?” Maybe you do, maybe you don’t. There’s a reason standard lighting setups exist; not every work has to be intentional about that particular aspect.
And maybe you want different lighting but the model you use doesn’t provide that kind of flexibility: when you say “still life” it insists on three-point lighting, because it thinks one implies the other just as “mug” implies “handle”. You can then go ahead and teach it about different lighting setups (“this is an example of backlight, this of frontlight, this is three-point”), and, with some skill and effort, voila: now “still life with backlighting” works. There absolutely is intent in that. Speaking of models that can do that, here are usage instructions for one that does.
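That teaching step can be caricatured in a few lines. This is a frequency table, not a real diffusion model (real systems do this with fine-tuning or textual inversion, and all the data here is invented), but it shows where the intent enters: labelled examples give a new tag enough weight to override the learned default.

```python
from collections import Counter, defaultdict

# Toy "model": counts which lighting co-occurred with which prompt tags.
class ToyModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # tag -> lighting vote counts

    def train(self, examples):
        for tags, lighting in examples:
            for tag in tags:
                self.counts[tag][lighting] += 1

    def predict_lighting(self, prompt_tags):
        votes = Counter()
        for tag in prompt_tags:
            votes.update(self.counts[tag])
        return votes.most_common(1)[0][0]

model = ToyModel()
# Original training set: every still life was lit with three-point
# lighting, so the model learns that one implies the other.
model.train([(["still life"], "three-point")] * 50)
print(model.predict_lighting(["still life"]))  # three-point

# "This is an example of backlight": enough labelled examples let the
# new tag outvote the default when it appears in the prompt.
model.train([(["backlighting"], "backlight")] * 60)
print(model.predict_lighting(["still life", "backlighting"]))  # backlight
```

The deliberate act of curating and labelling those sixty backlight examples is exactly the kind of human intent the thread is arguing about.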
You make a compelling and very interesting point here. I’m still considering it, because it’s provoked a lot of thought for me. Once I feel like I can definitively argue against or in favor of your point, I’ll get back to you.
Well done, I love intelligent discussions like this so much, I really missed them when my online communities started decaying. The pursuit of truth is so much fun!
Hey, thank you so much for your contribution to this discussion. You presented me a really challenging thought and I have appreciated grappling with it for a few days. I think you’ve really shifted some bits of my perspective, and I think I understand now.
I think there’s an ambiguity in my initial post here, and I wanted to check which of the following is the thing you read from it:
Both are in there, and neither of those are wrong. Generative AI does have serious limitations when it comes to detail control, and it’s also used a lot by people (not necessarily executives) who don’t respect or understand art – even to create things that they then consider art.
The thing is that we’ve had the same discussion back when photography became a thing. Ultimately what it did was free the art of painting from the shackles of having to do portraits.
One additional thing: I recommend strongly against trying to develop art skills by generating AI images. Buy pencil and paper, buy a graphics tablet, open Krita or Blender, and go through a couple of tutorials. In a few days you’ll have learned more of what you need to know to judge AI output than hitting generate could teach you in a year. How do I know that the eyes in that AI painting have an off-kilter perspective? Because, for the life of me, I can’t draw them straight either, but I put enough hours into drawing to learn to look at both the big picture and the minute detail. That’s one of the reasons I switched to sculpting.
It’s not any of those reasons; it’s because it can only exist by being trained on human-authored art, and in many cases you can extract a decent-ish copy of an original if you can specify enough of the tags that piece was labelled with.
The AI model is a lossy database of art. Using these models to launder copyright violations should be illegal; it immorally steals from creators, and it chills future artists by taking away the revenue they need while learning. That leads to AI models not having enough future work to train on, and to the stagnation of the human experience, as making beautiful things is no longer profitable enough, or doesn’t deliver the profit to those with power.
I did close my post by saying capitalism is responsible for the problems, so I think we’re on the same page about why it’s unethical to engage with AI art.
I am interested in engaging in a discourse not about that (I am very firmly against the proliferation of AI because of its many and varied bad social implications), but about building better arguments against it.
I have seen multiple people across the web argue that AI art is bad not just because it will put artists out of work, but because the product itself lacks some vital and unnameable human spark or soul. That’s a bad argument, because it turns the debate into esoteric philosophy instead of the practical point: if we do nothing, art stops being professionally viable, leaving many people unable to make a living and crushing something beautiful and wonderful about life forever.
Rich people ruin everything, is what I want the argument to be.
So I’m really glad you’re making that argument! Thanks, honestly, it’s great to see it!
The soul thing is very poor ground to argue on, yes, which is why I immediately spent the time to make a different argument :)
At the very best, it’s an intuitive understanding of “procedural oatmeal”, where the brain spots patterns in the output so quickly that it tires of the art and loses interest.
But I think that’s being generous; a lot of the time it’s purely about staking out a position based on identity, and responding to a challenge to that identity.
Of course! I didn’t mean to suggest you were arguing the soul thing. I’m sorry if that’s the impression I created; you’ve been very clearly arguing on a very different track, one I firmly agree with.
Oh dear, no, I’m replying to agree. It’s all good.
It’s a lazy Sunday, and while I have a dozen better things to do, trying to make clear posts about AI in a place where people will agree intelligently is a nice waste of time.
using an LLM doesn’t take money from artists
All right, I don’t want to dismiss how you feel or anything, but so many people said this that experiments were run to check, and it turns out that, overall, people mostly rated the robot art as more human; the effect comes from knowing who the painter is. All things being equal, emotional connections happen just as much (if not more) with generative art. That doesn’t surprise me, honestly: it’s mimicking humans, and ratings of how well it does so have guided it to the end product, so somehow the humanity is embedded. It doesn’t feel great, as I’m an artist myself, but I accept the science on this one.
This makes sense, but I always feel “tricked” if I don’t notice I’m reading or looking at generated stuff until a beat too late.
Definitely. Maybe it’s also the taint of the megacorps that train them and then put sadistic system prompts into them before unleashing them on the public.
I’m not sure I understand your overall point here. It sounds like you’re saying that the perceived emotional connections in art are simply the result of the viewer projecting emotions onto the piece, is that correct?