Nevertheless, personalized recommendations don't avoid the issue the parent was talking about. Say that, based on my personal history, Foursquare figures out that I love pizza.
Say I'm in Denver, CO tomorrow, twenty feet away from two pizza places, both of which have equal ratings but only one of which is "real": assume neither pays
Foursquare any money, but one is a false positive due to differences in personal taste, artificial pumping of results, etc. An algorithm is not going to obviate the problem of making a wrong recommendation.
Imagine if they could build up a ratings profile of people who rate the same place similarly, and then network that out... for instance:
Person 1 likes A and B, dislikes C, and hasn't been to D
Person 2 dislikes A, likes B and C, and hasn't been to D
Person 3 likes B, C, and D, and hasn't been to A
Person 4 likes A and D, hasn't been to B or C
So, A has 2 likes and 1 dislike, B has 3 likes, C has 2 likes and 1 dislike, and D has 2 likes.
That's the start of a rating scale.
But what if an algorithm could identify that, say, Person 1 and Person 4 have similar tastes... so it could recommend D to Person 1, and B to Person 4. It could also see that Persons 2 and 3 are similar, and recommend D to Person 2.
Now, here's where it gets a bit tricky. The algorithm can tease out that A and C are opposites: maybe one has great food but bad atmosphere/service, and the other is the reverse.
With that deduction, it can recommend B to Person 4, but not C.
In an ideal world, I would totally agree with you. However, most recommendation systems deal with three issues:
1. Sparsity of data. People rate much less than you'd expect. In fact, negative ratings are far less sparse than positive ones (this is unintuitive to me because it's not how I would act, but it is what it is).
2. Lack of features for similarity computation. Sometimes the rating matrix is all you have to compute similarities with, or you have crappy metadata. You may get lucky and pull down the Facebook Open Graph with enough coverage to work with; it depends on your model.
3. The problem of high variance due to latent features (which you alluded to in the last part): your model gets harder to track because there is insufficient information about why a place is good or bad. Maybe there's a correlation between seasonal variation and specialty cuisines; maybe they had a shitty chef the one time Person 4 came in.
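Point 1 is easy to see concretely. With the same toy +1/-1/0 encoding as above (invented numbers, not real data), two users who happen to have rated disjoint sets of places get zero cosine similarity, even if their tastes are actually identical, so the algorithm has nothing to network out:

```python
import numpy as np

# Six places; each user has only rated two, with no overlap.
# +1 = like, -1 = dislike, 0 = unrated.
u = np.array([1, -1, 0, 0, 0, 0])
v = np.array([0, 0, 1, -1, 0, 0])

# Cosine similarity over the full vectors: the disjoint support makes
# every product term zero, so similarity is exactly 0 -- no evidence
# either way, regardless of how alike these users really are.
cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos)  # 0.0
```

The sparser the matrix, the more user pairs end up in this "no shared ratings" bucket, which is exactly why the rating matrix alone is often not enough.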
I am not saying it is not do-able, I am just saying it is hard and sometimes ML fairy dust is not enough. :)