What does this even mean? Reputable sources tend to fail the 'Chinese room' experiment because you can never tell if your source is reputable or just faking being reputable (or later becomes corrupted).
Well, if we were talking about food, I'd say Gordon Ramsay, David Chang, Anthony Bourdain or The New York Times could be considered reputable, while a Yelp reviewer with a handful of reviews could be considered disreputable.
Ultimately you can boil it down to: Trust sources when they make statements that are later observed by me to be true, or are trusted by other sources that I trust. The negative feedback loop boils down to: If a source made a statement that I later found to be untrue, or extended trust to an untrustworthy source, reduce that source's trust.
The issue here is that those are all pre-AI reputation sources; the problem is that for post-AI content generation, how are you supposed to find this content in a nearly infinite ocean of 'semi-garbage'? Anthony is not making new content, and one day the other people will expire. In the meantime, a million 'semi-trustable' AI sources of content, with varying reach depending on how they've been promoted, will take over those markets.
There are any number of particular problems here that we already know do not mesh well with how humans think. You'd start your AI source 'true', build up a following, and then slowly steer them into the Q-anon pit of insanity. Most people will follow your truth and fight anything that questions it.