On page 11… "Developer's response" section, quote:
"In contrast, the developer of OpenCart responded to
the inventory vulnerability by posting a comment—“use your brain!
its [sic] not hard to come up with a solution that does not involve
coding!”—then closed both the inventory and voucher vulnerability
issues and blocked us from responding. "
The response seems shockingly arrogant and irresponsible at first. But he is in fact right. A good reminder that people who are unfriendly and argue against public opinion are not necessarily wrong.
I don't know opencart, but I know how it is handled in companies using other onlineshops:
When the stock of an item goes negative, it is noticed by staff when they want to fulfill the order. ("Strange, this customer ordered red socks, but we don't have any").
Staff sets the item as "Cannot be delivered" in the order, which brings the stock back up to zero.
Customer order gets fulfilled with a note: "Sorry, the red socks could not be delivered."
It's not perfect, but also not a big drama.
Same with coupons. You should have them invalidated during the order process. And if you see the same coupon code used twice, you know it is fraud.
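A minimal sketch of that single-use idea (assumed table and column names, not OpenCart's actual schema): let one atomic UPDATE both check and invalidate the coupon, so a second redemption simply matches zero rows and can be flagged.

```python
import sqlite3

# Hypothetical coupons table; the UPDATE checks "not yet used" and marks
# "used" in one atomic statement, so there is no check-then-set gap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coupons (code TEXT PRIMARY KEY, used INTEGER DEFAULT 0)")
conn.execute("INSERT INTO coupons (code) VALUES ('SOCKS10')")

def redeem(code):
    cur = conn.execute(
        "UPDATE coupons SET used = 1 WHERE code = ? AND used = 0", (code,)
    )
    conn.commit()
    return cur.rowcount == 1  # True only for the first successful redemption

print(redeem("SOCKS10"))  # True
print(redeem("SOCKS10"))  # False: already used, flag for review
```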
The response is arrogant and irresponsible, and not necessarily right either. This attitude is precisely the reason why most software is shit that requires constant manual babysitting. Bugs like this are no big deal until they are - even something as insignificant as an item missing from the inventory can become a source of drama if you promised exactly that item to your child as a Christmas present.
I just had a look at the OpenCart issue tracker and it's not an isolated incident. There's a bunch of closed issues where the devs were incredibly rude to people while dismissing their concerns. It's true that in many of these cases the issue wasn't with OpenCart, but they could have just said "This issue isn't with OpenCart, it's <blah>, closing." Not stuff like (and this is a real comment by a dev of OpenCart): "you are a moron! svg is not an image format!" and then, when other people tried to help: "don't reply to this guy hes a scumbag". Riiiiight. Often the issues were closed without response, and when the submitter asked why, they got an incredibly rude and arrogant response. I hope I never have to work with these assholes.
I reported an issue about them using SHA1 for password storage years ago. I was insulted and attacked personally, called a spammer, blocked, etc. So I washed my hands of the project, which I had been considering for a project at work. (I looked at password security first...)
It's funny, but the dev's response has since "disappeared". There's still some ignorance in the thread (thinking that multiple layers of SHA1 were safe), but hopefully history has vindicated my position at the time.
Never use OpenCart for anything. There's no point, when you'll be insulted and unsupported. And don't work with the owner/contributor: https://github.com/danielkerr
I love how in the password hashing one he first says something like "ok doesn't look too hard to do X" and then shortly after "I'm closing this, it's a waste of time". Wow.
Items cannot be shipped sometimes. In real life, that happens way way way more often due to other circumstances than due to race conditions. Staff breaks the last item. Manufacturer sent a box with 100 items but it only contained 99. The color of the shelf stained the last item. Etc etc etc.
Think about it: depending on the shop software, two orders need to be placed within a few milliseconds, both containing the same item. And the item has to be very low on stock so the two orders cannot both be fulfilled. That is a very, very rare event - almost a theoretical one.
I have seen a bunch of companies grow an online business. There were tons of pain points. This race condition was none of them.
It may be rare in this context. But a common requirement in asset management is eg: account holder X has a regulatory constraint that they may only control 5% or less of securities in the ABC industry across all accounts. So all trades must be validated that they don’t cross this line. In real life, the account holder may decide to sell shares of one company from the ABC list from one account so they can buy other shares from the list in another account.
The serializable isolation level does not guarantee reporting this correctly, and a concurrent process reading account balances could see a situation where account holder X is out-of-bounds of their regulatory constraints, despite the account holder themselves taking great pains to act lawfully.
Right, this is what happens with eventually-consistent systems. In the case of a retail system, most failures are easily resolved by delayed delivery, drop-shipping, or refunding the customer's order. It's not the end of the world.
Even banking works like this because settlements are done asynchronously. Some years ago there were several instances of an attack where someone got a hold of a rich middle easterner's debit card number and PIN, found a way to raise the withdrawal limit, then had crews of people hitting every ATM in Manhattan in one day. They were able to steal millions of dollars because the balance check was in real-time but the settlements weren't (so the balance was not drawn down).
While this is cool for hardening your transactions, the more locks you add, the more deadlocks and blocking appear, and it becomes harder and harder to keep transactions fast and isolated from each other. For example, if you do SELECT FOR UPDATE and the record is not found, the DB has to lock the key space to prevent such records from being added - did you intend to do that and block other inserts/updates? It's often easy to find a problem with transactions in code, but very hard to fix all the consequences.
By the way, because of this, instead of SELECT FOR UPDATE on a single record I prefer an UPDATE with no change. If there is no record, nothing will be locked.
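A sketch of that "UPDATE with no change" pattern, with an assumed accounts table. On server engines like InnoDB, a no-op UPDATE row-locks the row only if it exists, whereas SELECT ... FOR UPDATE on a missing row can take a gap lock that blocks inserts; SQLite locks at the database level, so this only demonstrates the rowcount logic.

```python
import sqlite3

# Hypothetical accounts table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

def lock_row(account_id):
    # No-op update: the same value is written back, but on a server-grade
    # engine the row is now locked for the rest of the transaction.
    cur = conn.execute("UPDATE accounts SET id = id WHERE id = ?", (account_id,))
    return cur.rowcount == 1  # False => row absent, so nothing was locked

print(lock_row(1))   # True: row exists, row lock taken
print(lock_row(42))  # False: no row, no lock, inserts stay unblocked
```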
Blocking phantom reads/writes usually only happens at the serializable transaction isolation level - and a range lock shouldn't block other writes unless they fall within the exact range specified, and then you are asking for a consistent view of reality.
Generally I don't find adding more locks hurts things that are well designed - often you will find deadlocks turn into just waiting for your turn in line - and then you can tune your db code or your hardware or whatever to maximize that throughput.
Also fwiw, if you are doing any sort of upsert and you are not locking for the update/insert combo, then you are definitely opening the door to a duplicate-key insert - as long as you handle that, though, nbd.
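A sketch of "handling that", with a hypothetical counters table: two racing workers may both see "no row" and both INSERT; the loser hits the constraint violation, which we treat as "row exists, update it instead".

```python
import sqlite3

# Hypothetical counters table; the PRIMARY KEY is what makes the losing
# INSERT fail instead of silently creating a duplicate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, n INTEGER)")

def upsert(name):
    try:
        conn.execute("INSERT INTO counters VALUES (?, 1)", (name,))
    except sqlite3.IntegrityError:
        # Lost the insert race: another writer got there first, so update.
        conn.execute("UPDATE counters SET n = n + 1 WHERE name = ?", (name,))
    conn.commit()

upsert("visits")
upsert("visits")  # duplicate key on the second call, falls back to UPDATE
print(conn.execute("SELECT n FROM counters WHERE name='visits'").fetchone()[0])  # 2
```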
Along with transactions, I also like to use 'Add' or 'Subtract' operations, which usually provide better reliability than manually reading and setting the value, which is a source of data races.
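A minimal sketch of the atomic-decrement idea with an assumed inventory table: instead of reading the stock and writing back a computed value, push the arithmetic and the guard into one statement so the database applies it atomically.

```python
import sqlite3

# Hypothetical inventory table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('red socks', 1)")

def buy(item):
    # Decrement and bounds-check in one statement; no read-modify-write gap
    # for a concurrent order to slip into.
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - 1 WHERE item = ? AND stock >= 1",
        (item,),
    )
    conn.commit()
    return cur.rowcount == 1  # False: out of stock, order rejected

print(buy("red socks"))  # True: last pair sold
print(buy("red socks"))  # False: stock stays at 0, never goes negative
```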
Yes! The default isolation level in postgres is READ COMMITTED which is rather weak and allows for the types of concurrency bugs discussed in the article (unless you use SELECT FOR UPDATE or some other kind of locking).
Very interesting. I wonder if they implemented the nested transactions aspect of 2AD (paper is dated at 2017, and they said they were working on it). That would make it extremely useful in pentesting.
ABSTRACT
In theory, database transactions protect application data from corruption and integrity violations. In practice, database transactions frequently execute under weak isolation that exposes programs to a range of concurrency anomalies, and programmers may fail to correctly employ transactions. While low transaction volumes mask many potential concurrency-related errors under normal operation, determined adversaries can exploit them programmatically for fun and profit. In this paper, we formalize a new kind of attack on database-backed applications called an ACIDRain attack, in which an adversary systematically exploits concurrency-related vulnerabilities via programmatically accessible APIs. These attacks are not theoretical: ACIDRain attacks have already occurred in a handful of applications in the wild, including one attack which bankrupted a popular Bitcoin exchange. To proactively detect the potential for ACIDRain attacks, we extend the theory of weak isolation to analyze latent potential for non-serializable behavior under concurrent web API calls. We introduce a language-agnostic method for detecting potential isolation anomalies in web applications, called Abstract Anomaly Detection (2AD), that uses dynamic traces of database accesses to efficiently reason about the space of possible concurrent interleavings. We apply a prototype 2AD analysis tool to 12 popular self-hosted eCommerce applications written in four languages and deployed on over 2M websites. We identify and verify 22 critical ACIDRain attacks that allow attackers to corrupt store inventory, over-spend gift cards, and steal inventory.
The value of tfa is the work they put into practical demonstration and analysis. Anybody can sit in their armchair and speculate about theoretical exploits. It's the actual practical exploits and mitigation techniques that matter.
Basically use the web APIs to issue two spend requests almost simultaneously, and the fact that it takes time for the database to synchronize means you can double spend. As the posters above are saying, it’s very similar to a race condition.
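A toy model of that double spend (this is an illustration of the race, not the paper's 2AD tool): two requests read the balance, both see enough funds, and both deduct, because the check and the write are not atomic. A Barrier forces the bad interleaving that a real attack would need precise timing to hit.

```python
import threading

balance = 100
results = []
both_read = threading.Barrier(2)  # holds both requests until each has read

def spend(amount):
    global balance
    snapshot = balance               # 1. read
    both_read.wait()                 # (force both requests past the read)
    if snapshot >= amount:           # 2. check against the stale snapshot
        balance = snapshot - amount  # 3. write, clobbering the other request
        results.append("approved")

t1 = threading.Thread(target=spend, args=(100,))
t2 = threading.Thread(target=spend, args=(100,))
t1.start(); t2.start(); t1.join(); t2.join()

print(results)  # ['approved', 'approved']: 200 spent from a 100 balance
print(balance)  # 0, not -100: one of the two deductions was lost
```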
Researchers found a way to analyze database logs to identify race conditions in databases, and demonstrated how, once this knowledge was acquired, websites could be exploited through their public APIs.
There is something based on Bloom IIRC, but the whole concept should be well-described and discoverable from the Keeping CALM paper: https://arxiv.org/abs/1901.01930
The ORM part is a thin wrapper that is there to give Haskell enough information for type-checking. The goal is really to have Haskell type-check the requests you make.
I have a Chrome extension which is perfect for race attacks. https://github.com/sakurity/racer