Hacker News
Fakespot – measure the legitimacy of reviews on popular sites (fakespot.com)
143 points by throwaway13337 on Dec 30, 2017 | hide | past | favorite | 84 comments


This just isn't reliable. I tested it on the Amazon Echo 2nd Gen; it gave it a D grade, 45.8% low-quality reviews, and half a star.

The two "unreliable" reviews I checked were two-star reviews. They were not high-quality reviews, but they weren't fake. For example, one complained that the reviewer thought it was wireless, but it requires a power cord.

I checked the product's two-star reviews (6% of reviews), and the ones I saw were all legit. They were long reviews, mostly complaining about the sound quality compared to the 1st gen. For reference, the top 5-star review also says sound quality was poor until after the third firmware update.

Amazon isn't paying for fake reviews of the Echo, and no one would pay someone to purchase a $79 product (verified purchase) and give it a two-star review.


Tried a couple of products... seems to be working great. For example, this is spot on:

https://www.fakespot.com/product/hanger-hooks-nuolux-hanger-...


Is it flagging Vine reviewers? That would make sense... Could just be unreliable though.


I've used Fakespot religiously since 2016 and I noticed the same thing, so I emailed them asking why Amazon's own products would have fake reviews. This was their reply (from co-founder, Ming):

"We have noticed trends where fake review clubs/bots seed their reviewer profiles by leaving reviews on various products, specifically Amazon best selling products to create a profile that looks legitimate (it is done to fool anti-fraud systems that Amazon utilizes). Our engine detects these seed reviewer profiles and therefore penalizes them accordingly by examining their profile and our AI algorithm (one of numerous) was trained to distinguish these patterns."

I hope this is helpful. I love the website, and I think it is not in the interest of Amazon, Yelp, or TripAdvisor to do anything about fake reviews, as those reviews only help their platforms generate more $.


Most people don't realize this, but a lot of products I'm seeing these days are priced almost 20% more on Amazon than at many other places. For example, the Nest was on sale on Amazon for about $100 more than at Costco during the Christmas period. Amazon has successfully trained people into thinking that everything there is lowest-priced and you just need to click a button to get the goods. Furthermore, many of the items now are actually fake even when branded. In many categories, like cables, batteries, and cameras, you are actually very likely to buy either Chinese-branded or fake branded items from Amazon rather than the real deal. For example, virtually 1 out of 4 Duracell product listings are actually fake Duracell. It will still say "Sold by Duracell" and be available on Amazon Prime to look completely legit, but it's a fake supplied by a Chinese seller. The funniest thing is that during Black Friday, many of these products' prices went up while Amazon advertised a sale on the site, truly cashing out the trust it has built over the years.

Amazon is becoming a jungle in the truest sense, where your money will be eaten away if you are not careful. Bezos keeps talking about "customer obsession," but he seems to completely ignore all of the above.


I remember a few years back there was a post on HN about algorithmic price changes of books on Amazon. I wonder if this was broadened to other products?


They do it on a bunch of their products - maybe all of them, I'm not sure.

You can see pricing history (as well as set yourself price alerts) at a couple of different websites such as https://camelcamelcamel.com/


I wonder how they address this problem, which affects all attempts to spot fake reviews: How do they know how accurate their results are?[0] What are the false negative and false positive rates? To measure your accuracy objectively, you'd need an independent method to verify, with high reliability, the legitimacy of the reviews. (You may think you can spot them manually, and maybe you can spot some obvious ones, but false negatives are perilous: You don't know if the reviewer is simply better at hiding their trick than you are in discovering it - and they likely have far more expertise and experience than you do. Consider: Could you write a fake review that would trick someone like yourself?)

Fakespot's FAQ says the following. I think the use of machine learning is a good example of the problem: the machine needs a good dataset from which to learn, which means they'd have to flag reviews in the training data as legitimate or not - but they have no reliable way to determine that. Basically, they are training the machine to make the same judgments, of unknown accuracy, as the humans (in fairness, I'm making some presumptions about their use of machine learning).

What criteria are used by Fakespot when analyzing reviews?

Fakespot utilizes numerous technologies to validate the authenticity of reviews.

The primary criteria is the language utilized by the reviewer, the profile of the reviewer, correlation with other reviewers data and machine learning algorithm that focuses on improving itself by detecting fraudulent reviews.

The technologies include: profile clusters, sentiment analysis, cluster correlation and artificial intelligence intertwined with these functionalities.

https://www.fakespot.com/faq

[0] As Richard Feynman said, "The first principle is that you must not fool yourself -- and you are the easiest person to fool."
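To make the false-positive/false-negative point concrete, here is a minimal sketch in Python. The labels below are invented; obtaining a trustworthy `truth` array is exactly the hard part being discussed - the arithmetic is trivial once you have it.

```python
# Hypothetical ground-truth labels vs. a detector's output (True = fake review).
truth     = [True, True, False, False, False, True, False, False]
predicted = [True, False, False, True, False, True, False, False]

fp = sum(p and not t for t, p in zip(truth, predicted))  # legit reviews flagged as fake
fn = sum(t and not p for t, p in zip(truth, predicted))  # fake reviews that slipped through

fp_rate = fp / sum(not t for t in truth)  # share of legit reviews wrongly flagged
fn_rate = fn / sum(truth)                 # share of fakes missed

print(f"false positive rate: {fp_rate:.2f}")  # 0.20
print(f"false negative rate: {fn_rate:.2f}")  # 0.33
```

Without an independent way to fill in `truth`, any reported accuracy is a judgment of unknown quality grading another judgment of unknown quality.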


Statistics.... No, absolute ground truth is impossible for this problem. There has been some research estimating the percentage of fake/spam reviews at 2-6%, but that was from a few years ago; I'm sure the percentage has increased greatly since then. [Great, another dissertation... my MSc dissertation was on this topic.]

Text-only features are not fabulous, just useful (around 75% correct, using a dataset where the ground truth is known). So there are, or can be, linguistic differences in the writing that indicate the writer is not 'truthful'. Sentiment analysis by itself, however, is abysmal - and the behavior of spammers has changed over the years, so they are more knowledgeable about how to craft reviews.

But other signals - relationships to other reviewers, IP addresses, submission time of the review - have been shown to be more accurate, though not in the 90+% range. Establishing who is in a spamming group seems to be reliable, though, so once that is known, you can more confidently label their reviews as spam [but not with 100% certainty, of course, to be fair and objective].

I guess Fakespot takes a stab at estimating correctness and hopes the false negative rate is acceptably low. Yelp OTOH cranks things up so the false positive rate is high....
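For illustration, text-only features of the kind mentioned above can be as crude as surface statistics. A stdlib-only sketch (the specific features are my own guesses for illustration, not taken from any particular study):

```python
import re

def text_features(review: str) -> dict:
    """Crude surface features of the sort used in deception-detection work."""
    words = re.findall(r"[a-z']+", review.lower())
    n = max(len(words), 1)
    return {
        "length": len(words),
        # exclamation marks per word
        "exclaim_rate": review.count("!") / n,
        # deceptive reviews have been reported to over-use first-person pronouns
        "first_person": sum(w in {"i", "me", "my", "myself"} for w in words) / n,
        # ...and superlatives
        "superlatives": sum(w in {"best", "amazing", "perfect", "awesome"} for w in words) / n,
    }

f = text_features("Best product ever!!! I love it, my life is perfect now!")
print(f)
```

Numbers like these would then feed an ordinary classifier trained on labeled examples - which is where the ground-truth problem raised above comes back in.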


Thanks; that kind of contribution is why I read HN.

> using a dataset where ground truth is known

How is this dataset created?


I had the same reaction. I clicked many links on the website, only to find fluffy statements about machine learning algorithms, but no information whatsoever about metrics and ground truth. They basically dodge the most important questions, and therefore I conclude that they are not in any way trustworthy.


My knee-jerk reaction is that they are detecting anomalies and low-quality reviews for some definition of quality.

As you pointed out, without a consistent ground truth target how could it really separate "fake" from "legit"? Perhaps they partnered with a fake review business and used their records?


I really can't stand this website.

We had them say ~40% of our Amazon reviews were fake/unreliable... and I'm pretty damn sure we'd know if we were faking our own reviews.

I don't know how their system works, but the false positive rate in my experience is absurd.


Their long-term monetization strategy probably involves you paying them to "verify" your reviews.


Who's "we"? Looking at the website, I think it uses NLP to analyze the review text, and they state in their About Us that they check grammar and correlation - so you could have a legitimate review with bad grammar marked as spam.


Care to share a link so we can take a look?


Same here: for my all-organic ebook reviews, it gave an F and said there was "HIGH DECEPTION" involved. It seems to penalize you for having above four stars.


Maybe I'm naive but isn't this something Amazon, for example, should be doing? Why aren't they concerned about the integrity of their community?


Because Amazon needs to tread carefully. A lot of people depend on Amazon for their business.

This site has a very low bar for false positives. It can mark the majority of reviews as fake without consequences. If Amazon did something similar, it would be like YouTube demonetizing videos all over again.


> Why aren't they concerned about the integrity of their community?

Because consumers aren't imposing sufficient costs on them for not being concerned to change that attitude. Amazon is interested in making money as efficiently as possible, period.


Kinda like how everyone leaves positive reviews on Airbnb because you don't want to look like a dick to future hosts. In fact, there really is no upside to leaving a negative review.

For example, "great place if you're a dog lover" = gang of intimidating street dogs on your doorstep. "Great eco-friendly washing machine" = one of those vibrating buckets. "Great place if you're a morning person" = loud train or church bells at 4am, or just next to a loud highway.

I've lived in some dumpy places in my life without issue so I'm not just some picky tourist expecting a Four Seasons experience in someone's guest house. But the reviews/ratings are pretty useless and Airbnb really has no reason to fix them.


When vacationing with our family last year, we stayed in eight different AirBnBs. We talked extensively about this and came to the same conclusion - there is no upside to leaving a negative review! There is a downside, so I just don't do it. If I have a bad experience, I just don't leave a review.

If AirBnB would allow you to see the “review rate” of a property that would be telling. “14% of the guests of this property left a review” would mean a lot compared to “72% of the guests of this property left a review”.


Revealing the review rate for properties sounds like a very good idea. I wonder if there are any downsides.

On a related note, I was talking with an airbnb host yesterday who had just hosted some fairly messy guests that may or may not have broken something. He decided that not leaving a review was the best course of action. It seems that revealing the review rate could work both ways (for hosts and guests).


> It seems that revealing the review rate could work both ways (for hosts and guests).

Wouldn't that put us right back in the same situation? Guests and hosts would now worry about keeping their review rates above some threshold and feel pressure to write some kind of review. The original pressure not to leave negative reviews would still exist.

I mean, I don't know if there's a solution to this. I remember, a long time ago when I bought and sold on eBay, that I rarely gave negative feedback - just to avoid retaliatory feedback from the other party.


A place I stayed last year has been booked pretty much all the time since I was there (11 months). Yet, my review is the latest one published.

The place wasn't bad (by AirBnB standards) and I was surprised by this.


It seems to me you also have to reveal that for individuals. That is, "this person reviewed X of their stays."

What about the simple "Would you recommend this to a friend?" followed by a _short_ reason? I mean, is "too much morning sun" a bad review? Nope! It is what it is. Some people will love it. Some won't. Some won't care either way.

The point being, by hiding the truth ABB is essentially inviting people to book stays they are not going to enjoy.

The advantage of leaving a "bad" review is obvious: you might save someone from a shite experience. And if that were the community standard among guests, that someone could be you.


> I mean is "too much morning sun" a bad review?

Yes, inadequate light isolation in a place rented for sleeping is a significant negative feature.


> In fact, there really is no upside to leaving a negative review.

There's a downside - you get reviewed yourself, as a guest. My wife had an OK experience with airbnb, then a shitty one. airbnb did nothing helpful, and the host threatened my wife with a negative review if she left one. We had documented evidence of things like

* a completely broken shower

* missing curtains in the living area, exposing your entire room to an office block across the street, with no way to walk between bathroom and bedroom without being seen by office workers day and night

* broken television

* moldy refrigerator

* and - as trivial as it sounds - 1 hanger in a closet for someone who booked a 10 day stay

airbnb hung her out to dry: no refund, no partial refund. "It must have been acceptable because you stayed." There was no guarantee of getting any refund during that time, and we were already out over $2k for just the accommodations. Horrible, horrible experience, and I will not be giving them our business again any time soon.

Again, the host threatened to leave a "negative guest" review. Given that my wife would only have that one review (despite a previous airbnb stay), this has more of a damaging effect on her ability to use the service in the future than it does on the host with dozens of "fabulous/great place!" crap reviews. She declined to leave a negative review out of some odd fear, against my suggestion, but what's done is done. The airbnb team makes it pretty clear how they resolve issues, and it was not even close to being fair, from my perspective.

Of course, looking at things "objectively", yes, my wife could have come in and broken the shower, added mold to the refrigerator, broken the television and stolen the curtains just to try to wrangle a few bucks from the host. There is always that to consider, I suppose.


The reviews are kept secret until both parties submit, so the host would not have known what you gave him. You should definitely have left a negative review.

Some countries and cities don't have curtains. A TV service and hangers don't qualify as a valid reason to get $2k refunded, or even half of that.


Pictures of the room with curtains were posted; then we arrived at a room without them, with the attendant privacy issues... false advertising.

We'd already been told that we'd get a negative review if she left anything, as we'd already tried to go through the 'dispute' process with airbnb.

Claiming a working TV and showing pictures of the main living area with curtains, then delivering neither, is at the very least deceptive. A broken shower (with video proof) apparently isn't enough to justify any dispute at all, because it was "fixed" on the 4th day - which, I would think, is an admission that there was an actual problem that needed attention, and some sort of adjustment would have been appreciated/welcomed.

Hangers - totally minor, but having emailed the 'hosts' earlier, they'd indicated this would be provided, and it wasn't. I don't know how many things need to be a) promised in writing or in photos and then b) not delivered before some notion of a "goodwill gesture" becomes appropriate (10% off a future airbnb booking? 10% refund/adjustment for poor service?). Apparently we were not even close to the threshold, and if that doesn't qualify, I have no good reason to consider airbnb for future travel needs.


A dwelling without hot water doesn't qualify as livable in first-world countries. You should have disputed on that alone, left on the first day, and asked for a full refund.

If possible and necessary, deny the charge on your card.

I've been on both sides and seen all sorts of claims. Disputes are always messy; AirBnb seems to read them and try to deliberate.


'Given that my wife would only have that one review this has more of a damaging effect on her ability to use the service in the future than it does on the host with dozens of "fabulous/great place!"'

I have had to trash an established account with 16 great profile reviews because an agent acting for a host left a vindictive review for me.

Creating a new account takes a few minutes and I found no downside to getting places without any review history.

AirBnB isn't Facebook, people don't think it's funny that you don't have an established account.

Conversely, it doesn't work the other way around as I would be wary of booking a place with zero reviews and, I expect, most other people think the same?

Oddly though, the only place I have stayed with zero reviews (limited number of options at the time) turned out to be the best place so far?


We left an Airbnb unit mid-stay due to ongoing construction, no warm/hot water in the shower, and general filth. The host agreed to let us out of the reservation so long as we didn't leave too bad of a review. Because the existing reviews were all rosy ("can't wait to stay here again!"), we fully intended to leave an honest review so that future guests would be forewarned. Unfortunately, the Airbnb system wouldn't let us leave a review, even right after we left the unit.


Yes. But one person's trash is another's gold. None of your example reviews are objectively negative. They are simply facts to be interpreted within the context of the receiver.

That said, the lack of true clarity and transparency feels risky (to the brand). That is, if the reviews are "fake," no one - not even the property owners - can trust them. On one hand, you're saying people leave positive reviews to bump up their reputation. On the other hand, if everyone knows the reviews are bullshit, the owners aren't going to trust them either.

Are they still reviews if they have no real truth/value?


'But the reviews/ratings are pretty useless and Airbnb really has no reason to fix them.'

Very true. I have found that the places with a lot of good reviews (compared to similar places nearby) tend to be the ones that allow short stays and have a high number of bed spaces despite being a one-bedroom apartment.

From my own experience, you don't notice much bad stuff in a couple of days the way you do after a week or longer. Especially if among a group of friends.

A similar place that doesn't allow instant booking and only accepts long stays may be a more discerning host but get far fewer reviews in return.


Disputes are resolved outside of the star system. AirBnb will basically reimburse 50% of the costs automatically if you complain.


> Why aren't they concerned about the integrity of their community?

Amazon has virtually no competition, ergo is not incentivized whatsoever to be concerned with the integrity of their community. Further, Amazon will charge you for return shipping so, again, they couldn't care less if a positively reviewed product actually sucks. Now that I think about it, the whole system is pretty brilliant because they're double dipping: charging the customer for returns but charging the vendor for warehouse space. It's the definition of a win-win.


Return shipping is free if the product is defective or the website description is inaccurate.


Most people choose to bin and forget the $10 piece of shit they got, instead of sending it back and asking for a refund.


Oh, you have to pay for returns? That's not the case (yet?) with Amazon.in.


Yeah, but when you have enough money, why not take the Pfizer model and let someone else do the hard work/risk/etc?

Ten-to-one a company like Amazon buys them in a year or three. Assuming that they don't pivot to the 'extortion' business model instead.


> Yeah, but when you have enough money

Imagine you are appointed the new CEO of Amazon.com. Please define "enough money," even a loose ballpark estimate.


Enough money === so much that even a wildly successful version of the product would at best increase revenue by a few percent long term. We're talking billions of dollars a year for a company like Amazon.


Because they have fake reviews for their own stuff: https://www.fakespot.com/product/fire-tv-stick-with-alexa-vo...


Just don't buy anything that isn't explicitly sold by Amazon.com. Or a few trusted others like AnkerDirect


This is a common misconception. As has been stated before in similar threads, Amazon commingles inventory. So if you, as a third-party seller, have a bunch of fake widgets for sale on Amazon, and anyone else (including Amazon themselves) sells the same widget (same by name), then Amazon has no control over whose inventory a buyer receives, regardless of the "Sold by" label on Amazon.com.

https://www.wsj.com/articles/on-amazon-pooled-merchandise-op...

Relevant paragraphs if you can't get past the paywall:

"As more third-party sellers have signed up to offer products through Amazon and use its order-fulfillment services, the Seattle-based giant has allowed many to pool their inventory with supposedly identical items supplied by other sellers—in essence commingling products from third-party merchants with those supplied directly to Amazon by the brands themselves.

In other words, a product ordered from a third-party seller may not have originated from that particular seller. If the bar code matches, any one that is on the shelf will do."


Per this discussion https://news.ycombinator.com/item?id=13040084

"FBA vendors appear to have control [...] But I still don't think the amazon customer can see if an fba vendor allows commingling"


I'm skeptical Fakespot works well. Consider this product:

> https://www.amazon.com/gp/product/B0064EKNKI

Where do they get the idea that 25% of the reviews are unreliable/deceptive? I bought it and it works just fine, which is pretty consistent with 87% of the reviews being 4+. Why should I think otherwise?


It's attempting to spot sponsored reviews, not bad products.


To quote itself, it's spotting "unreliable" and "deceptive" reviews, not merely sponsored ones. And even in the latter case: do those look like sponsored reviews?


No, but I imagine sponsored reviews aren't supposed to look like sponsored reviews.

The three it flagged as suspicious for the ethernet cable above were all very short, like "Great product!" or "A++++". I'm not sure what to think of that. I write reviews (of flashlights) as a hobby and wouldn't call something that doesn't describe how the product was tested or include any technical or performance analysis a review.

On the other hand, the average buyer doesn't usually have the knowledge to write a useful technical review and just wants to report that the product met their expectations.


> No, but I imagine sponsored reviews aren't supposed to look like sponsored reviews.

I mean, yes, but that wasn't the point. If the sponsored reviews blend in with the normal reviews and seem to be at least as accurate as normal reviews, then how in the world are they "fake"? Where is the tangible distinction for me as a customer? (Why in the world would I avoid such products?)

> I write reviews (of flashlights) as a hobby

Funny you mention that! Flashlights seem like just about the worst-marketed products I've seen on Amazon, so they indeed definitely need great reviews. At the risk of going off-topic, do you have any tips on figuring out what the lumen output of a flashlight actually is (or even what LEDs they use)? I keep seeing Cree LEDs being claimed to output 2-3x as many lumens as they're even capable of.


Flashlights on Amazon are mostly horrible. Thrunite and their offshoot Wowtac are the only brands I'd buy there, because they sell direct. It's usually better to order from specialty dealers. It's often cheaper too, as they tend to do outreach on forums, /r/flashlight, etc... and offer standing coupon codes.

Visual identification is usually the best way to tell what LED a light really uses. Here's a chart with photos of a bunch of common ones: http://budgetlightforum.com/node/26665

While the lights you find on Amazon making extreme claims about output are almost certainly lying, LED manufacturer datasheets tend to be conservative about how much power LEDs can handle. Typical mains-powered fixtures don't have the sort of heatsinking and thermal paths good flashlights do, and LED manufacturers base their limits on a service life of tens of thousands of hours. You will not use your flashlight for 50,000 hours. Overdriven LEDs aren't rare, especially in enthusiast-oriented lights.

Measuring lumens is hard because you need a way to collect all the light, or at least average it, then put that average on a sensor. Commercial integrating spheres for this purpose cost thousands of dollars. It's possible to estimate using a shoebox and a smartphone though, assuming you have a light source of known output with which to calibrate it. I wrote an app for that: https://github.com/zakwilson/ceilingbounce/
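The calibration behind that approach reduces to a linear ratio: with fixed geometry, the sensor reading scales with total output. A tiny sketch (function name and numbers are illustrative, not taken from the app):

```python
def estimate_lumens(lux_unknown: float, lux_reference: float,
                    lumens_reference: float) -> float:
    """Estimate a light's output from readings in a makeshift integrating box.

    With the same box, ceiling, and sensor position for both measurements,
    the reading is proportional to total output, so:
        unknown lumens = reference lumens * (reading_unknown / reading_reference)
    """
    return lumens_reference * lux_unknown / lux_reference

# e.g. a known 350-lumen light reads 120 lux in the box;
# the mystery light reads 300 lux in the same setup
print(estimate_lumens(300, 120, 350))  # 875.0
```

The accuracy of the estimate is only as good as the rated output of the reference light and the repeatability of the geometry.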


That's awesome info, thank you! :)


I should also mention that a lot of reviews of flashlights found on Amazon praising things I would rate as absolute junk are probably entirely honest.

If your last flashlight used alkaline batteries and an ancient LED, or even an incandescent, anything using an LED from this decade and a Li-ion battery is going to look pretty impressive. That Li-ion cell might well be taken from a laptop battery sent in for recycling, and the charger it comes with might burn your house down, but it's still impressive to the average person.

On the other hand, I've seen a one-star review for not including a battery. It never said, or even suggested that a battery might possibly be included. My conclusion is that Amazon reviews for most kinds of products aren't useful for much more than determining if a product has an unreasonably high failure rate.


Yeah, I definitely did find these impressive for the same reasons you mention... hopefully my house won't burn down with the charger I got (it hasn't yet)!


Just make sure it's not this one: http://lygte-info.dk/review/Review%20Charger%20Bowei%20HC-10...

And ideally, see if you can find your charger tested here: http://www.lygte-info.dk/info/indexBatteriesAndChargers%20UK...

If it is the first one, break it so nobody else finds it and tries to use it, throw it away and replace it. The Xtar MC1 is the cheapest safe charger commonly sold in the US; they're about $5.


Haha, thank you for the heads up :) my charger is not on that list apparently... it's this one (which is no longer being sold; sorry for the low-res image): https://imgur.com/VpPNswX

But some of the batteries are on there with not-great reviews... I guess I might have to replace them?


Yeah, those batteries are junk. Try the Sanyo NCR18650GA from illumn.com.


You are probably correct to be skeptical. Anyone engaging this company for real business would likely question how they determined fake/not-fake for the training data, what the performance of the model is, etc.


I looked at the critical reviews, and it seems there was an uptick of fake products sent to customers. Perhaps such an uptick triggers this Fakespot behavior? My observation, for this and other products, is that when something is shady, the reviews consist of a lot of date-clustered positive reviews followed by a trickle of critical reviews that point out specific issues with the product.


Merely having fake products doesn't imply fake reviews, though? If anything, the reviews reporting counterfeits would seem to be more genuine.

I also don't really see any particular uptick or date-clustering like you mention. There are a total of 7 reviews that say "fake" or "counterfeit" out of 177 critical reviews, out of 1,382 total reviews... and I'm not noticing any particular clustering of good reviews preceding the most prominent critical review [1] (I haven't checked the rest).

[1] https://www.amazon.com/Tera-Grand-Ethernet-Retractable-Plays...
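(That kind of keyword count is easy to reproduce; a sketch with made-up review text standing in for the scraped reviews:)

```python
# Made-up review snippets, standing in for the real scraped review text.
reviews = [
    "Arrived quickly, works as described.",
    "This is a counterfeit cable, not the real product!",
    "Great product! A++++",
    "Pretty sure I received a fake; the packaging was wrong.",
]

keywords = ("fake", "counterfeit")
flagged = [r for r in reviews if any(k in r.lower() for k in keywords)]
print(f"{len(flagged)} of {len(reviews)} mention a fake")  # 2 of 4 mention a fake
```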


It might be worth clicking the "run analysis again" button if it's offered to you. On the headphones product I tried (and have purchased myself), it flagged some genuine reviews as having been created by automated accounts. Seems it thought the surname Akcicek was just random letters generated by a computer. But after running the analysis on the page again, it changed its mind about that review and decided it was now legitimate.


> But after running the analysis on the page again, it changed its mind about that review and decided it was now legitimate.

This doesn't sound very reliable. If nothing had changed on the review, what's the basis for it changing its mind?


It probably is a learning system... It got trained more. The review itself doesn't have to change. The system merely adjusted its parameters due to more exposure to new reviews.


Possibly, yeah. I was also under the misapprehension the re-analyse button came up (randomly) immediately after a review, but it seems it appears on old stored reviews.

Anyone who uses this site really needs to do a re-analysis, because on the products I just tested I'd rate its prior ability to detect fakes (reviews and reviewers) at F.

After re-analysis I'll upgrade its ability to C.

TBH, I don't understand the point. I trust my own ability to evaluate reviews more than the unknown logic of a third party website, which may have its own interests/motives influencing its reviews of reviews.


I wish this site worked well. Based on my tests, it doesn't. This is a real problem without a solution yet.


Works well based on mine.


I did my MSc in Data Science dissertation on classification of fake reviews - I never looked at Fakespot in depth, but my cursory gut feeling is that ReviewMeta.com is better (their blog is helpful). The problem in general is pervasive across all review based sites, like Yelp and TripAdvisor, but as the top poster points out, Amazon benefits from being a hands-off middleman, so they are not going to bother dealing with any problems until it becomes a PR nightmare.

If anyone is interested, you can find my dissertation at: https://douglas-fraser.com/datadata/

Only text-based features were used - nothing involving other types of signals like reviewer name or spamming groups. I started out with 180 million Amazon reviews from 1997 onwards... but that became too big a project, so the dataset used was a test set used in a lot of research on this topic.

I will be expanding on the research in my blog, examining other sets of text based features I never got the time to use, and things like PCA as well.


Can't a malicious actor use this to check their own reviews before posting, until they pass the test?


How do fake reviewers get the "verified purchase" tag on Amazon? This particular analysis[1] explicitly lists "Alex Brown" as an "Unreliable Reviewer", but the Amazon page[2] says his review is a verified purchase.

Did Novopal just buy it for him?

(Also, I wish I could plug my Amazon profile into Fakespot and see what it thinks about me.)

[1]: https://www.fakespot.com/product/novopal-baby-stroller-organ...

[2]: https://smile.amazon.com/dp/B01N5J6LXT#customerReviews


The fake reviewer will buy the product first and then the company will reimburse and reward the reviewer on the side once a review is posted.


> Also, I wish I could plug my Amazon profile into Fakespot and see what it thinks about me

Although that sounds neat, and I wish I could do that too, I suspect this would be far too helpful to actors running bogus profiles to be allowed to exist.


I've heard that one way of doing this is for the seller to give the purchaser a discount coupon for 100% off.


No, discounted purchases don't get you the "Verified" badge. The way sellers operate today is, they ask you to buy the product "for real" and then reimburse you (and more) via Paypal. This is against Amazon's rules but lots of sellers are doing it.

I rank around 6000th reviewer in FR and yet I constantly receive messages offering me to review products using this method.


Is there a way to report/flag a _product_ listing on Amazon? As opposed to flagging individual reviews.

I ran into a software listing that had several hundred reviews, all 5 stars, but in reality is some SEO-driven bug-infested piece of crap. I wish I could take it down somehow, because of its borderline scam nature.


There is a "Report incorrect product information" link on each product page, and someone does read what you post there. I've reported a few products where the name didn't match the picture or description, and they were fixed within a few days. Whether they would care about a less obvious problem, I don't know.


Does anyone know how well it works to just check whether reviews have been posted in a short, bursty window? I feel like most fake ones are posted very recently or during a short time period, but I'm not sure if that just means I'm missing the rest of them.
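A naive version of that burst check might look like this (window size and threshold are arbitrary choices, and real detectors presumably combine this with other signals):

```python
from datetime import date, timedelta

def bursty_fraction(review_dates, window_days=7, threshold=5):
    """Fraction of reviews falling inside any window of `window_days`
    that contains at least `threshold` reviews (a crude burst signal)."""
    dates = sorted(review_dates)
    bursty = set()
    for i, d in enumerate(dates):
        j = i
        while j < len(dates) and dates[j] - d <= timedelta(days=window_days):
            j += 1
        if j - i >= threshold:          # dense window starting at review i
            bursty.update(range(i, j))
    return len(bursty) / len(dates) if dates else 0.0

# ten reviews posted the same day, then a slow monthly trickle
dates = [date(2017, 12, 1)] * 10 + \
        [date(2017, 12, 1) + timedelta(days=30 * k) for k in range(1, 5)]
print(round(bursty_fraction(dates), 2))  # 0.71
```

A high fraction on its own proves nothing - a product launch or a sale also clusters reviews - which may be why the heuristic misses so much in practice.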


Fake reviews are clustered per category. Take a look at creams that make you less fat, magic things that grow your hair back, or iPhone accessories - you will find that almost all the reviews are fake. There's too much money to be made.


Tried it on the Amazon Echo Dot bestseller, which has 4.5 stars, but the adjusted trustworthiness score is 3. Thirty percent low-quality reviews, according to Fakespot.


Doesn't seem to support Amazon's European marketplaces.

ReviewMeta does, and works very well.


It works on Amazon UK. I think their magic algorithms only support English.


I use this site regularly; it's never let me down.


Let's be honest: fake reviews are a fake problem. Reviews were never supposed to work for the customer's benefit. I mean, how in the world would the shallow emotional opinions of people with unmet expectations and wasted time, or with fanatical love for the brand, help anyone? It's just that when people start ignoring reviews, all those corporations don't make as much profit.



