Completely true, but copyright is not a "right" in the sense of human rights; it's a legal construct we invented to produce certain social benefits. And it certainly wasn't my impression that most HN users view the current state of copyright law as an unmitigated positive force in the world.
Except you were actually trained, while the AI applied some form of computation that we (perhaps opportunistically) call "training", but it isn't really the same thing. You can't win in court just by giving two things the same name.
> Aren't they afraid some court might, at some point, force them to pay each artist back a fee for each generated image?
I'd say they're banking on the horse having bolted by the time such a thing might happen (i.e. courts would need to force 1000s of very large powerful companies to pay millions of people - an insurmountable legal effort).
"I am no longer able to sell my product in the market since it has been commoditized to the benefit if everyone else" is not the same as "I have been robbed".
From the start, OpenAI has semantically overloaded the "AI is an existential risk" argument, shifting it from "AI is going to make me starve" to "AI is going to go rogue".
I don't really see AI as an existential risk (nor something that'll single-handedly starve people nor "go rogue" - unless one defines that as something like "having CVEs").
This is less about AI, per se, and more about corporate -vs- personal IP rights. Historically, IP law has bent to benefit large corporations while citing personal IP rights as the raison d'être (& never delivering on that justification). What OpenAI (& many others) are doing here just very flagrantly demonstrates that the justification was only ever an excuse, & that restrictions imposed by the Berne Convention, et al, have never really applied to corporations at scale (outside of small case-by-case exceptional examples).
The livelihoods being stolen are not being stolen by AI - rather it's a further reinforcement (scaling up) of a system that has been doing so for well over 100 years.
> again trained (without permission) on copyrighted work
So far there is no solid proof of that. They didn't disclose their sources or methodology; aside from "trained" and "copyrighted", everything in that claim is questionable. Otherwise they would already be paying royalties.
They could have used the output of the previous version with prompts generated by GPT, then corrected by humans based on the produced image. They could also use CV to analyze the new/old images: if a new feature shows up in an image, add it to the prompt and train again.
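To make the speculated pipeline concrete, here is a minimal sketch of one iteration of that loop (prompt generation, image sampling from the previous model, CV feedback, prompt enrichment). Every function here is an illustrative stub I made up; nothing is a real OpenAI API or a claim about how any model was actually trained.

```python
# Hypothetical self-distillation loop, as speculated above.
# All names are illustrative stubs, not real APIs.

def generate_prompt_with_llm(seed_topic):
    # Stand-in for an LLM producing a training caption.
    return f"a detailed illustration of {seed_topic}"

def generate_image_with_prev_model(prompt):
    # Stand-in for sampling the previous image model; here we
    # pretend the model added an unprompted "watercolor" style.
    return {"prompt": prompt, "features": {"watercolor"}}

def detect_new_features(image):
    # Stand-in for a CV model spotting features absent from the caption.
    return {f for f in image["features"] if f not in image["prompt"]}

def build_training_pair(seed_topic):
    """One iteration: prompt -> image -> CV feedback -> enriched prompt."""
    prompt = generate_prompt_with_llm(seed_topic)
    image = generate_image_with_prev_model(prompt)
    missing = detect_new_features(image)
    enriched = prompt + (", " + ", ".join(sorted(missing)) if missing else "")
    return enriched, image
```

The point of the sketch is only the data flow: each (enriched prompt, image) pair could be fed back as training data without ever touching the original copyrighted sources directly, which is why the commenter argues provenance is hard to prove.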
It’s absolutely insane this is “allowed” under regular copyright, and now going into mass-consumer commercial products. Given the state of courts, I don’t have any hopes this will be reversed.
They are certainly striking under-the-table deals with big IP holders like Disney to not poke the bears, but leave all smaller actors defenseless (or rather penniless, more so than they already are).
Sam Altman has already empirically proven himself to be rich enough to be above the courts with the whole WorldCoin thing, why should he assume it would suddenly be different now?
There’s probably an argument against my point, but this sure ain’t it. I can watch every Disney movie for free on the internet, but that doesn’t make it legal.
What's illegal here is unauthorized distribution. Downloading a copy of something from a public server cannot get you in trouble here. Only publishing does.
It's not legal, and it never was. In 2008 it was even raised from a civil to a criminal offense. It's a common misconception, because it is more lucrative to go after people who also upload/publish, due to the higher possible damages.
> Creators can now also opt their images out from training of our future image generation models.
So this version was again trained (without permission) on copyrighted work. And they try to shift the burden onto artists to manually opt out.
Aren't they afraid some court might, at some point, force them to pay each artist back a fee for each generated image?