There's an interesting tale related to this from 1993. On July 4th, 1993, a plant in Japan that made epoxy resin for semiconductor packaging exploded and was destroyed by fire. As a consequence, there was a significant spike in DRAM prices that summer. I happen to remember this specifically because that summer I was on a one-month work term at a software house, where I got to work on the POSIX environment running under VMS on a DEC Alpha. My task was to get their test suites running on the Alpha (mostly a bunch of shell scripts). When I went to purchase a couple of SIMMs to upgrade the memory of my Amiga, the prices had more than doubled relative to the month before. A year or two after that, low-profile DRAM packages, no more than a millimeter tall, started to show up on SIMMs to reduce the amount of epoxy used in each chip.
I find it interesting that prices seem to have roughly bottomed out around 2015 -- it's gotten a little cheaper at times, but it's hovered around that level ever since.
And the bottom wasn't that much cheaper than 2012 prices, either, not compared to how steeply prices had plummeted up until that point.
Have we reached the bottom of RAM prices? When could we expect prices to move sharply down again, if ever?
I've noticed that for a while (with many exceptions, of course; this is purely anecdotal), you can generally double your RAM for the same price about every 3 to 5 years. That's also about as often as you see process refinements leading to higher yields/surpluses, along with newer models that benefit from process shrinks.
RAM chip packages themselves have always been about the same size on the stick, so I figured it was a shrink that led to higher capacity for roughly the same amount of wafer used... so actual costs remained relatively consistent, since it's the area cut from the wafer that tends to determine the cost for many things.
Generally spitballing it here, but that would be my take on why prices have hovered around that $90 to $180 mark for two/four sticks of a decent capacity. I've had to budget about $200 for RAM for nearly every build I've done in the past two decades.
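If you want to put rough numbers on that doubling claim, here's a quick back-of-the-envelope sketch (the 3-to-5-year doubling periods are just the anecdotal guesses above, not measured data):

    # If capacity per dollar doubles every N years, what's the
    # implied annual decline in price per GB?
    def annual_price_decline(doubling_years):
        per_year = 0.5 ** (1 / doubling_years)  # price factor per year
        return 1 - per_year                      # fraction lost per year

    for years in (3, 4, 5):
        print(f"doubling every {years} years -> "
              f"~{annual_price_decline(years):.0%}/year cheaper per GB")

    # doubling every 3 years -> ~21%/year cheaper per GB
    # doubling every 4 years -> ~16%/year cheaper per GB
    # doubling every 5 years -> ~13%/year cheaper per GB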
Looking back at an old Newegg order from my last PC build in 2012, I paid $84.99 for 16GB (4x4GB) of DDR3. Today (9.5 years later) it looks like at least ~$150 for 32GB.
A 2x16GB kit of DDR4-3200 was $110 last June. I have a receipt from purchasing four kits of it. The same Ripjaws V kit is on "sale" for $160 today.
I also built my last PC in 2012, but I paid over $500 for 32GB of DDR3 overclocked to 2400MHz, which is much more comparable to the DDR4 you'll find today (mostly 2400MHz+, with some laptops at 2133).
Not really. Fabs don't become obsolete for 20-30 years because there is ample demand for processes that are decades old. Profit margins drop, but by the time they drop significantly, the fab investment is generally paid off and it becomes a gravy train. AFAIK Intel's last plant to close was D2 in Santa Clara - it was built in 1989 and closed in 2009, after 20 years. Their earlier Chandler, AZ plant also closed after 20 years.
Sure, that was an oversimplification in my question, I guess - the returns on older fabs don't go to zero, but their products fetch a margin that's a function of how close to the bleeding edge the process is. But the question still stands.
(btw, fab site lifecycles don't usually correspond 1:1 to fab equipment generations; they tend to get refitted. This was also the case with Intel's D2, which was running an old process by the time it was retired, but not one as old as the fab itself.)
The big costs are the variable costs per chip, though, which is why technology shrinks have delivered such great cost savings for the end product even though every shrink has required ever greater outlays in development costs and capex for the manufacturing plant. The savings in variable costs are so large that they pay for all of that, and the price still drops.
So even if you have entirely paid off an old fab or bought one outright for ~$0 at a fire sale, you probably won't be able to make DRAM at a hugely lower overall cost. Slightly lower, perhaps.
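To make that concrete, here's a toy per-gigabit cost model (every number in it is invented purely for illustration, not real DRAM economics): even with capex at zero, the old node's lower bits-per-wafer can leave it costing more per bit than a newer fab's all-in cost.

    # Toy cost-per-gigabit model (all numbers invented, purely illustrative).
    def cost_per_gbit(amortized_capex, wafer_cost, gbit_per_wafer):
        # fixed share (fab capex spread per Gbit) + variable share
        # (wafer processing cost spread over the bits on each wafer)
        return amortized_capex + wafer_cost / gbit_per_wafer

    # Newer node: carries capex, but a shrink doubles the bits per wafer.
    new_fab = cost_per_gbit(amortized_capex=0.20, wafer_cost=4000, gbit_per_wafer=8000)

    # Older node in a fully paid-off fab: zero capex, half the bits per wafer.
    old_fab = cost_per_gbit(amortized_capex=0.00, wafer_cost=4000, gbit_per_wafer=4000)

    print(f"new node: ${new_fab:.2f}/Gbit, paid-off old node: ${old_fab:.2f}/Gbit")
    # new node: $0.70/Gbit, paid-off old node: $1.00/Gbit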
I could well be wrong. Sorry, I shouldn't have tried to make such a strong statement; I don't actually have much knowledge of recent technology nodes.
I'm struggling to see how your first link demonstrates what you say. Actually, that's closer to what I'm familiar with.
Obviously it's quite old, though. The second link does seem to show costs shifting toward equipment and trending upward. It doesn't seem very detailed, but I suppose the first graph on the left of page 9 shows mostly fixed costs in orange. Thanks for the link.
CMOS process scaling is running out of steam, which is generally going to make performance/capacity/etc. improvements in anything that depends on semiconductors increasingly difficult.
Could be a myth or not, but it was said back in 1993 that a fire in a cresol chemical factory in Japan made DRAM prices go up, which delayed the adoption of Windows NT.
Semi-related: running some Unix flavor on PCs was something many of us wanted, but it was always postponed because it took twice the RAM of Windows. I first heard about Linux in 1996, but only started to play with it in 1998, when I could afford a separate PC with 64MB of RAM to run it comfortably.
The 1986 US-Japan semiconductor trade agreement is of singular importance and merits study. I recommend Trade Politics and the Semiconductor Industry (1994), an NBER working paper.
It's interesting that trade wars (Japan, China) were involved in both chip shortages, 1988 and 2020. What's more interesting is that the exact roles played by the trade wars are equally unclear, because both shortages were caused by multiple factors and the trade wars' roles are heavily politicized.
In 1988, U.S. computer makers attacked the US-Japan semiconductor pact, claiming that it reduced Japanese production and export volume and was thus responsible for triggering the U.S. chip shortage - helping one industry while hurting another. Industry leaders like Compaq, Atari, and Apple lobbied to abolish the pact's restrictions. The U.S. semiconductor industry, on the other hand, denied this claim and argued that rising demand and production issues were the real problem, not the pact. Both sides had some persuasive-sounding arguments.
In 2021, the situation is almost exactly the same. Huawei and others claimed that the US-China trade war triggered the chip shortage through panic buying of large volumes of chips in fear of U.S. sanctions. Another claim is that some orders to SMIC in China were canceled in favor of TSMC and Samsung due to the trade war, reducing supply further. Meanwhile, others denied these claims and said the main problem was rising demand.
History never repeats itself, but it rhymes. Both trade wars definitely played a role, if not a major one then at least a minor one. But to what extent, exactly? Difficult to tell.
This really brings back memories (no pun intended) of getting a Computer Shopper and thumbing through the ads of the THOUSANDS of computer part vendors in the '80s and '90s.
It was a really sublime time to be alive in the PC industry. Expansion cards, CPUs, SIMMs, video cards, etc.
I remember the NES game Zelda II: The Adventure of Link was delayed. Nintendo Power or some other NES magazine had a quote from Link saying he was sorry, but there was an issue with some chips.
I forgot all about that delay until seeing this headline, and Zelda II is mentioned specifically in the article.
Certain developments in console and arcade games correlated directly with DRAM pricing at that time. Before 1985, music was usually limited to short jingles and a game usually had just one "world" to it; afterwards, in the face of rapidly falling prices, nearly every new release had a soundtrack a few minutes long and multiple themed settings. The 1988 shortage led to some games with cut content, but more often just delayed releases, as with Zelda II.
In 1988 I was working at Mentor Graphics, in Beaverton, Oregon.
One day somebody mentioned that, at some unknown time in the previous three months, somebody had raided all the Macs in managers' offices, removing all but one memory stick in each. Nobody had noticed.
Also: I found an invoice I paid in the early '90s: $1000 for eight one-megabyte RAM SIMMs. Yes, megabytes. They filled out an Apple SE/30 I used to run Apple A/UX. (A/UX was Apple's Unix: System V with Berkeley extensions and a Mac System 7 GUI emulator, which worked.) It compiled GCC in about 2 hours. It had (well, has, still) an 80MB disk drive.
I only recently found out I could have put in 4MB sticks not long after, for 32MB instead, which would have felt infinite.
This is one of the big reasons OS/2 crashed and burned. Version 1.1, released in '88, was the first version with Presentation Manager, and you really wanted a 4MB system, which had just become insanely expensive. Windows/286 still ran in under a megabyte, and Windows/386 could multitask MS-DOS, something the FOOTBALL OS/2 betas could also do, but IBM pulled it from the product.
> A two-megabit chip, for example, was $599 in September 1985, making the cost-per-megabyte for larger-capacity RAM $300, per McCallum’s analysis.
The numbers here make no sense, and that's because they don't match the source analysis: it's a two-megabyte chip, for $599, that appears in the analysis. (Thus, $600 for 2 MB = $300/MB.)
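To make the discrepancy explicit, here's the trivial arithmetic on both readings:

    # $599 for 2 megabits vs. 2 megabytes of DRAM, September 1985
    price = 599.0
    print(f"if megabits:  ${price / (2 / 8):,.0f}/MB")  # 2 Mbit = 0.25 MB -> ~$2,396/MB
    print(f"if megabytes: ${price / 2:,.0f}/MB")        # 2 MB -> ~$300/MB

Only the megabyte reading reproduces the article's own $300/MB figure.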