Only if you're using a very particular and honestly circular-sounding definition of "good".
Some deviations matter more than others, even among deviations that take the same amount of data to correct.
Think about film grain. Some codecs can characterize it, remove it when compressing, and then synthesize new visually matching grain when decompressing.
Let's say it takes a billion bytes of correction data to turn the lossy version back into the lossless original.
The version with synthetic film grain still needs a billion bytes, maybe even slightly more, even if the synthetic grain looks 95% as good as the real thing: the new grain matches the original statistically but not pixel for pixel, so every speck of it is one more deviation to undo. The cost to turn it lossless is the wrong metric.
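
To make that concrete, here's a rough sketch (my own toy setup, not any real codec: a 1-D "frame", Gaussian noise standing in for grain, and zlib standing in for a proper residual coder). It compares the compressed residual needed to losslessly restore a grain-free version versus a synthetic-grain version:

    import numpy as np
    import zlib

    rng = np.random.default_rng(0)

    # Toy "frame": a smooth gradient plus Gaussian grain.
    smooth = np.linspace(0, 255, 500_000)
    grain = rng.normal(0, 8, smooth.shape)
    original = np.round(smooth + grain).astype(np.int16)

    # Version A: grain stripped entirely (looks waxy and unnaturally clean).
    version_a = np.round(smooth).astype(np.int16)

    # Version B: fresh synthetic grain with the same statistics
    # (perceptually far closer to the original).
    version_b = np.round(smooth + rng.normal(0, 8, smooth.shape)).astype(np.int16)

    def bytes_to_lossless(version):
        # Size of the compressed residual that restores the exact original.
        residual = original - version
        return len(zlib.compress(residual.tobytes(), 9))

    print("fix the grain-free version:     ", bytes_to_lossless(version_a))
    print("fix the synthetic-grain version:", bytes_to_lossless(version_b))

    # The synthetic-grain residual comes out *larger*: the fresh grain is
    # uncorrelated with the original grain, so the two noises add instead
    # of cancelling, even though that version looks much better.

The exact numbers don't matter; the point is that "bytes needed to get back to lossless" can rank the better-looking version as the worse one.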