> a lot of which is motivated by a sense of unfairness
This is not something I've seen once in any sort of criticism of "AI art", and elsewhere on the internet I'm largely in an anti-AI-art bubble.
Most legitimate pushback I've seen has been more on the non-consensual training of models. Many artists don't want their work to be sucked up into the "AI Borg Model" and then regurgitated by someone else, removing the artist's consent, credit, and compensation.
I've found it rare for those dead-set against AI art to concede that it has value once you take copyright out of the equation; bringing up Adobe Firefly instead pivots the conversation to other, considerably weaker arguments.
"Using stock art is just further appropriation." Which is silly, considering that the intent and licensing of stock artwork clearly mean all parties expect works to be turned into commodities for commercial exploitation.
"The old ways are best; the new ways are bad and strip the soul from the creation process and the resulting works." Also unconvincing, considering that most of the people saying this use radically different, digitized, heavily time-optimized art workflows compared to the industry norm of even 30 years ago.
Not that I don't see the problems. The potential for job losses is a real risk: optimized workflows require less work and therefore fewer workers. But that happens regardless of copyright enforcement against AI models. The problems that commercialized AI art workflows cause may even be exacerbated by enforcing copyright on training data, since that would hand a monopoly on all higher-quality generative AI models to already entrenched multinational intellectual-property rightsholders. I think a lot of artists forget that copyright isn't so much for them as for the Disneys of the world.
I don't think there's much wrong with that, though. The whole copyright/licensing/fair-use question is the one reasonably objective problem with "AI art" at the moment. People might have other concerns, but once you solve the copyright issues, it starts to come down to personal, subjective preferences.
I absolutely have seen it. A lot. It's dressed up as Luddism, more often expressed as "you shouldn't be able to have those results because I spent years honing my craft" which may or may not be followed by "...and if we allow this, those years were wasted and I'm out of a job, along with millions of others".
Luddites were a real group, who really did lose their jobs to technical progress. So it seems fair.
Technical progress really does require adaptation sometimes. We don't criticize Luddites because we think they were wrong about change being real.
Why would anyone have to articulate that? They are programs that allow people to make the pictures they have in their minds into a form that others can now look at. People who otherwise wouldn't be able to make these pictures before (because they were bad at drawing or whatever) now can. That's not "necessary" but then again nothing about art really is. It's just fun.
You are absolutely correct. The reaction from artists is sheer terror disguised as a dialogue about whether a machine can learn to make art by looking at other people's work, just as another human would.
I see it as a slow transition though; there's still plenty a human artist can offer over any current model, even with a carefully curated prompt. But yes, eventually the whole industry will die down, especially since models can now generate sounds, 3D models, textures, natural placement of objects on a map, etc. Like everything we've invented, it's a tool that helps us do things faster, and it will displace people. Tough to say whether it's right or wrong, but it's what we've done all through history: move on from one technology to the next. I wonder if traditional artists complained/fretted when digital art/tablets were getting big, hmm.
SD base models can't really be used to imitate the style of other artists reliably, because the datasets they were trained on are a huge mess. Caption accuracy is all over the place. For example, Joan Cornella's work and Cyanide & Happiness comics are in LAION5B, but if you prompt SD to make art in their style you'll get something completely different. Try prompting for a "minigun" and you'll also get something weird.
In order to copy style from other artists reliably, you have to make a LoRA yourself. That involves a lot of manual work, and it can't really be automated if you want good results.
Artists can opt out of future SD base models (which doesn't matter), but they can't opt out of someone making a LoRA of their work (which actually works).
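For readers unfamiliar with why a LoRA is practical here: it freezes the base model's weights and learns only a small low-rank update, which is why a modest dataset of one artist's work is enough to capture a style. A minimal numpy sketch of the idea (shapes and names are illustrative, not taken from any real Stable Diffusion codebase):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one projection layer (illustrative 768x768 shape).
d = 768
W = rng.standard_normal((d, d))

# LoRA update: two small rank-r matrices instead of a full d x d matrix.
r = 8
A = rng.standard_normal((r, d)) * 0.01  # trained on the artist's images
B = np.zeros((d, r))                    # zero-initialized, also trained

def forward(x, scale=1.0):
    # Base output plus the low-rank correction (B @ A) applied to the input.
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
# With B at zero, the LoRA changes nothing: the model starts out identical.
assert np.allclose(forward(x), x @ W.T)

# The parameter savings are the whole point:
full_params = d * d      # 589,824 for a full fine-tune of this one layer
lora_params = 2 * d * r  # 12,288 for the LoRA update
print(full_params, lora_params)
```

Because only `A` and `B` are trained, a style LoRA is tiny (megabytes) next to the base checkpoint, and it can be made from an artist's public portfolio whether or not they opted out of the base model's training set.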
>> a lot of which is motivated by a sense of unfairness
> This is not something I've seen once in any sort of criticism of "AI art"
I've actually seen this a lot.
In my view, it's not coming from professional artists working in the field. Their concern is more that people are ripping off their style, or that AI is making their efforts unnecessary (e.g. lots of people who made a living by copying the style of particular anime & cartoons for fans no longer have a purpose, since AI can do that given enough source material).
Non-professional artists, on the other hand, are still learning and have put a lot of time into their craft and it hasn't paid off yet. They seem to be annoyed that other people are getting results (via AI), without actually having to learn the mechanics of art.
AI basically lets your generic art history major produce lots and lots of pieces, because they can describe artwork well enough and know where to find good samples for the AI. The only thing stopping them was mere mechanical inability, not knowledge of the art space.
Is this part actually coming from artists? What’s the suggested amount (be it upper-quadrillion dollars per second or $0.25/use)?
I think compensation as a condition implicitly assumes that financial gain is artists’ motive and that they actually live off that income. Rather, I see a lot of vocal opposition to AI image generators from people who aren’t drawing for profit at all.
So, is money going to solve it, or is that a wrong assumption, or will it have to be settled by lump sums?
Yes. The group of artists that are suing Stability AI and Midjourney are calling for consent, credit, and compensation.
https://stablediffusionlitigation.com/
> Since then, we’ve heard from people all over the world—especially writers, artists, programmers, and other creators—who are concerned about AI systems being trained on vast amounts of copyrighted work with no consent, no credit, and no compensation.
I think the details of credit and compensation aren't as important, because once you require consent, artists can decide whether they're happy with the compensation model and choose to give consent (or not) based on that.
>Most legitimate pushback I've seen has been more on the non-consensual training of models
Look at the pushback to Adobe’s model.
“Non consent of model input” is just a tool they’re using in the hopes of destroying the tech. Plenty of companies have datasets of these same people’s work where the T&C permits training.
The narrative will switch once you can no longer use the “stealing/consent” argument. They won’t suddenly become fine with this tech just because the dataset consented.
If their own work isn't hoovered up en masse, I'm sure that artists still won't be happy that they can no longer make a living from their profession. But it's a bit rich to think they're being disingenuous in objecting to their own work being co-opted to enable the process of putting them out of work.
I don’t think it’s rich at all. You saw the argument switch within minutes when Adobe Firefly launched. It’s not about the process; it’s about what legal and social levers can be pulled to stop the tech.
I work in the creative fields so my work will be impacted by this but I realize it’s pointless to fight it.