AI Art: An In-Depth Look at Why Artists Hate It

Shiloh Connor
5 min read · Jan 6, 2023


I want to establish something from the get-go. This article is going to be arguing from a moral standpoint, as all my thought pieces do. I will not be arguing from a position of intellectual property law, or logistics- but from a position of ethics. And that may read as mean or judgemental.

But I ask you to hold space with me, read through to the end, and then form an opinion.

Photo by Michael Dziedzic on Unsplash

I want to start with this- Artificial Intelligence is a tool. Learning Algorithms are a tool. And tools are, for the most part, neutral- depending on how they are designed. The concept of AI art in and of itself is not the evil here. The tool is not the enemy; it's the people wielding it. So before this becomes an argument about that, let me address it directly: an AI cannot exist without a coder to make it, so the problems of the AI are the responsibility of its crafters.

So, what is AI art? And why are artists so mad about it?

Lauren Duplessis from Domestika explains AI art as follows:

In short, it is artwork (visual, audio, or otherwise) generated by a machine learning process — that is, a machine has “learned” some information, and used it to generate a new image. Humans may have collected the data, or written instructions for the machine to use, but the process of creation is left to the machine.

In basic terms, it's a coded algorithm that harvests data, sorts it, and slowly gets better at analyzing the harvested data the more it collects. These Learning Algorithms have been used for all sorts of things- those who remember the Tumblr porn ban will remember well how easily the censorbot was fooled, and how often it just… didn't work.

The Tumblr censorbot was an open-source code taken from the internet and utilized by Yahoo-Era Tumblr’s staff to find and target smut, pornbots, and child abuse material. But because it was maybe three lines of code maximum, it was barely functional and would false flag fossils, sand dunes, and pictures of Spiderman.

No, that’s not a joke. Those are all real things that got flagged.

From that incident we learned one thing- the effectiveness of an AI is determined by its creator and the code it is built from.

So then… what are AI Art Bots built from?

Simple- Stable Diffusion.

Stable Diffusion (SD) is a text-to-image generative AI model that was launched in 2022 by Stability AI, a UK-based company that builds open AI tools.

Stable Diffusion generates images in seconds, conditioned on text descriptions known as prompts. It is not limited to image generation; it also handles tasks such as inpainting, outpainting, and image-to-image generation guided by prompts.

Since Stable Diffusion is a deep learning model, it is trained on billions of text-image pairs to generate images from mere text.

The technology is cool! You feed text descriptions and images in pairs into the algorithm, and it learns how to create images with each addition and layer of learning. That seems like an amazing way to learn about art and technology, right? That kind of tool could be used to help artists find inspiration for new subjects, or for AI experimentation to learn how to improve these kinds of engines. Education, the furthering of science- so many options, and all of them exciting to techies.
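To make those "layers of learning" a little more concrete, here is a toy numpy sketch of the forward "noising" process that diffusion models like Stable Diffusion are trained to reverse. This is purely illustrative- the linear schedule and the numbers are simplified assumptions of mine, not Stability AI's actual code, and the real system is a large neural network conditioned on text embeddings of the prompt.

```python
import numpy as np

def add_noise(image, t, num_steps=1000):
    """Blend an image with Gaussian noise according to timestep t.

    At t=0 the image is untouched; near t=num_steps it is almost
    pure noise. Training teaches a network to undo this blending,
    step by step, guided by the text prompt- which is why the model
    cannot work without huge numbers of text-image pairs to learn from.
    """
    rng = np.random.default_rng(0)
    # Simplified linear schedule: alpha_bar shrinks from 1 toward 0.
    alpha_bar = 1.0 - t / num_steps
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

image = np.ones((4, 4))                    # stand-in for a training image
slightly_noisy = add_noise(image, t=10)    # early step: mostly image
mostly_noise = add_noise(image, t=990)     # late step: mostly noise
```

Generating a picture is, in essence, running this process backwards: start from pure noise and repeatedly denoise toward whatever the prompt describes- using patterns the model absorbed from its training images.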

But it’s not being used that way- and that’s the problem.

Photo by Hitesh Choudhary on Unsplash

The fact that the people using Stable Diffusion algorithms feed in the works of others without their consent is the root of the problem, beginning to end. It's not the medium of AI art, or the bots themselves- it's the disrespect and selfishness of these programmers and handlers.

Imagine finding out that your works were used by a tech company to teach their bot. Pulling up the website and seeing the programmers who made the AI's code getting all the credit- but no list of the artists whose works were used to teach it.

They didn’t ask to use your work, or honor your contribution in any way.

Why does that hurt? Why does it offend?

Let’s go back to how Stable Diffusion works. Without data input to learn from, the code cannot produce images. Without images and text to teach the learning machine, it’s just a few lines of code and some graphics. It needs input to output. And so, every artist whose work is used has contributed to the bot being functional.

It's like forcing someone to voice act for a cartoon and not crediting them. The final product wouldn't be the same without them; their labor is vital to it. But they were given no choice and no acknowledgement.

Does that not seem… shitty?

Art theft scandals are incredibly common. DeviantART has had more than a few, as have the now-defunct platform Art4Love and the Facebook RPG Castle Age- and these are just a few small examples. So the fact that artists don't like their work being used without their permission shouldn't be a surprise. We've had this conversation so many times before. It isn't any different just because the thieves are programmers.

Our autonomy and dignity are the core of the issue. That is the long and short of it. But that doesn't mean we hate the people who use these bots, nor do we hate you. We're just asking you to stop using them. To boost the commissions and works of human artists, and to help protect our labor and our contributions to society from being used without our permission.

Support human artists. That’s all we ask.



Shiloh Connor

Freelance Artist, Writer, and Activist looking to start a conversation!