
And can sustain top clocks with proper cooling, unlike laptops.


Laptop cpus might be able to sustain top clocks with proper cooling, but we'll never know.


What? There are a lot of laptops that hit their top clocks with no issues. The main problem is that doing so seriously reduces your battery life and generates significant noise.

Pick anything over the $800 range that isn't a MacBook and you're more likely than not to hit top clocks.


The problem isn't hitting the top clocks, but sustaining them. Only the biggest "desktop replacement" class laptops tend to be able to sustain 120W+ CPU dissipation.

Note that workstation-class (H) laptop CPUs also make compromises on performance - the Ryzen 9 4900H is 8C16T but only has 4MB L2$, 8MB L3$, and a max TDP of 54W. A desktop Ryzen 9 3950X by comparison is 16C32T and has 8MB L2$, 64MB L3$, and a 105W default TDP (and will go much higher with even basic PBO if your cooling allows). The differences on the Intel side are even starker.


As an aside, you have an interesting notation there, using $ to stand in for cache; using that symbol never once crossed my mind, hah.


Hmm, I suppose it's fallen a bit out of favor recently; I don't see it quite as much as I used to. But it's pretty much standard terminology in the semi industry, showing up in everything from academic papers to data sheets (commonly you'll see it as D$ and I$, for data and instruction cache).


Why I've not seen it before is likely down to localization: I'm in the UK, and of course the pound has its own symbol, so the connection of $ -> cash -> cache may not be so readily made.


> Only the biggest "desktop replacement" class laptops tend to be able to sustain 120W+ CPU dissipation.

Yes. And as a bonus, I can use my Alienware 17 R4 as a throwing weapon. Or for a workout. And the power brick is a perfect cup warmer.

But I like it, nonetheless.


Hey! I never thought of using the brick to keep my coffee warm. It is even the right size. Excellent! Thanks....


Ignoring the discussion around TDP figures, I'm well aware of the power limits, but that was (IMHO) not the point of OP.

It’s obvious that it’s impossible at the moment to get 3950X performance in a laptop format, but you can get laptops able to keep temps reasonable with 35-50W, and that’s what a lot of laptop SoCs target as total power.

Those SoCs hit (and sustain) their top clocks, whatever those are for that specific SKU.

What I understood from OP is a common complaint about MacBooks, which consistently fail to sustain their specified top clocks because Apple deliberately under-specifies its cooling solutions for better ergonomics (and design reasons).


> Those SoCs hit (and sustain) their top clocks, whatever those are for that specific SKU.

This is actually completely wrong. Almost no laptops sustain their top (boost) clock on heavy workloads. Most usually only sustain max performance for minutes (or seconds!) before throttling. Here's an example chart that shows how various premium Athena/Evo U laptops perform: https://www.notebookcheck.net/Asus-Zenbook-S-UX393JA-Laptop-...

On the workstation side, people complain about Macbooks, but recent MBPs actually throttle their Intel H processors less than a comparable XPS 15 for example: https://www.notebookcheck.net/Apple-MacBook-Pro-15-2019-in-r...

If you are interested in how modern Intel laptop chips throttle and what base and boost clocks mean, you want to do a search for PL1, PL2, and Tau. For AMD chips, you will want to look up STAPM, Fast and Slow PPT.

Note that while an i7-10875H's top "boost" clock is 5.1GHz, the sustained "base" clock is only 2.3GHz. That is so low as to be meaningless as a top speed. In practice, unless your laptop's cooling is absolutely terrible, you'll probably end up mostly running in the 3-3.5GHz range under full load. In comparison, a properly cooled same-gen i9-10900K desktop should be able to maintain a sustained (all-the-time) clock of about 5GHz (very close to its 5.3GHz boost). AMD chips scale a little better due to 7nm having better power efficiency and how PPT works, but roughly the same ratio applies.
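To make the PL1/PL2/Tau interplay concrete, here's a tiny simulation of the commonly described behaviour. This is a sketch, not Intel's actual firmware algorithm, and the wattages are made up to roughly match a 45W-class H part:

```python
# A minimal sketch (not Intel's real algorithm) of how PL1, PL2 and Tau interact:
# the package may draw up to PL2 only while an exponentially weighted moving
# average of its power stays below PL1; once the average reaches PL1, power is
# clamped back down. Numbers below are illustrative for a 45 W-class H-series CPU.
PL1, PL2, TAU, DT = 45.0, 107.0, 28.0, 1.0   # watts, watts, seconds, seconds

avg = 0.0                                    # moving average of package power
for t in range(120):
    power = PL2 if avg < PL1 else PL1        # boost only while under budget
    alpha = DT / TAU
    avg += alpha * (power - avg)             # EWMA update with time constant TAU
    if power < PL2:
        print(f"boost exhausted after ~{t} s, power clamped to {PL1} W sustained")
        break
```

With these illustrative numbers the boost budget runs out after roughly 15 seconds, which is why short benchmarks look so much better than sustained all-core loads.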


I’ve actually worked with Ryzen SoCs myself (I do board design).

This is the problem with all the marketing BS. When I talked about "top clocks", I wasn't referring to "boost" clocks. I'm talking about the clocks the SoC is designed for (which won't appear on the box). A lot of laptops are not leaving "performance gaps" due to bad cooling; those SoCs are designed for that level of performance, and attaching a fat copper heatsink won't make much difference.

I had a lot of trouble with a client that complained that our board was not properly designed because the performance they were seeing was not “as advertised”. In the end we had to ship the whole thing to AMD, and have them test the system with a thermal sink. Everything was as expected.

If anyone is interested in this kind of stuff, your explanation is really good, so I won’t add anything because I’d probably do a terrible job :)


I agree that there's a lot of misunderstanding around clocks; I've been on the other end (evaluation and validation of embedded boards, including V1000 Ryzen SoCs). But I think I'd disagree somewhat with the characterization of clocks as purely "marketing BS."

Back in the day, most CPUs had a fixed clock, but modern Intel and AMD chips simply don't - they all clock opportunistically, depending on power, thermals, and also workload (try running an AVX-512 load, for example). How do you characterize "clock" in this context? Base (minimum) and Boost (hard limit, now split into Max Turbo <2C and All Boost MC) seem to be reasonably sensible numbers.

Now we can argue semantics all day, but to bring it back around: if you're just going to say "top clocks" means whatever the SoC was designed for at a specific workload/power envelope (in AMD's PB, that'd be PPT, TDC, and EDC), then every laptop will "hit their top clocks with no issues," but I'd say that argument (statement?) is a bit circular/pointless. ;P


Well, you can get a 3950X in a laptop, capped at 95W TDP. XMG, Cyberpower and Schenker sell those (manufactured by Tongfang).


My 3950X got to 315W socket power with OC. It wasn't stable, but that was just testing.


Can you keep them in Prime95?


Yes, but only with ear protection and while the battery can keep up. When the battery is out of power, the computer shuts off. Need to buy a better power supply than the one it came with...


Even SSDs throttle under heavy use.


Sure. The NVMe that gives you 2GB/s write may throttle to only 500 MB/s after a few seconds of writes, but--

How often is that a problem, really?


NVMe throttling from 2GB/s to 0.5GB/s is usually not thermal; it's SLC cache exhaustion that drops the drive back to its native MLC/TLC write mode. One can get a Samsung Pro or a similar SSD and it will retain 2GB/s indefinitely. On cheaper drives, having a lot of free space (i.e. SLC cache) may help. Not sure, though, what process can generate such sustained write bandwidth beyond a few things like copying or video transcoding.
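If you want to see where a given drive falls off, a rough check looks something like this (a sketch assuming Linux and Python, with a hypothetical scratch path on the drive under test and plenty of free space):

```python
# Writes 64 GB in 256 MB chunks and prints per-chunk throughput, so you can see
# at what point sustained writes fall off a cliff. PATH is hypothetical; point it
# at the drive you want to test and make sure there's enough free space.
import os, time

PATH = "/mnt/nvme/throughput_test.bin"
CHUNK = 256 * 1024 * 1024
TOTAL_GB = 64

buf = os.urandom(CHUNK)                      # incompressible data
with open(PATH, "wb", buffering=0) as f:
    for i in range(TOTAL_GB * 1024 // 256):
        start = time.monotonic()
        f.write(buf)
        os.fsync(f.fileno())                 # force it to the drive each chunk
        elapsed = time.monotonic() - start
        print(f"chunk {i:3d}: {CHUNK / elapsed / 1e6:7.1f} MB/s")
os.remove(PATH)
```

An SLC-cache cliff shows up after a roughly fixed number of gigabytes written, regardless of temperature, whereas thermal throttling creeps in only once the drive has heated up.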


Which is why I replaced the cheap drive in my build desktop with a Samsung Pro NVME. It made a significant difference.

Multithreaded builds (make -j 24) can really hammer the drive. Read and write interleaved, which uses up cache in both directions.


I've used their Pro NVMe drives as my OS drive for the last few years. Never had any issues with read/write speeds either.


Two words: Heat Sinks

A well proven way to move heat out of a silicon package to prolong its ability to perform at its highest potential. And in a desktop, you've got room for 'em.


From what I'm told, you need a heatsink for the PCIe SerDes and maybe the controller CPU on it, but NAND actually likes warmer temperatures.


On an NVMe? You don't need heatsinks on an NVMe. You have much bigger issues in your case if you think heatsinks will help on an NVMe.


Are you suggesting NVME heatsinks are a marketing gimmick? I beg to differ.


No, that's wrong. A heatsink on an NVMe helps it run at its top speed longer, or indefinitely. It has nothing to do with issues in your case; those drives just get warm, and before one gets too hot the heat has to be carried away or it will throttle.


Specifically, a lot of NVMe drives sit right next to the GPU in PCIe slot 1, which is a pretty constant source of heat under load.

I installed an EK spreader sandwich on my Samsung NVMe drive, and it made a massive (20°C) difference over stock. It was previously a bare stick with no surface area/thermal sink to pull heat away.


What's an "EK spreader sandwich"? Can you send a link to yours as well?


No. NVMe drives are rated to run at 0-70°C. If your case makes your NVMe run at 80+ degrees, you should probably fix the airflow in your case, as you're probably running your CPU/GPU rather hot.


You do not need a case for that! The NVMe heats up on its own. There are a bunch of reviews going into that topic, because it really matters for high-end NVMe drives. See for example https://www.computerbase.de/2020-09/samsung-980-pro-ssd-test... - though the 980 Evo did well without an additional cooler, and there it indeed depends on the airflow.

You seem to assume it's the case that heats up to 80°C or more; that's not true, and it's also not necessary for NVMe SSDs to go over their limit. Your CPU/GPU has fans moving the heat away; that's a better position to be in...


I said if your NVMe is reaching 80+ degrees then you should fix the airflow in the case. I did not say the case is 80 degrees. Any semi-decent airflow is 100% capable of running an NVMe without causing any performance issues.


As you saw in the article I linked, that's just not true. Those temps are with airflow, and the SSDs still go over their temp limit.


Nope. That article states that the SSD was "naked", which leads me to believe they removed the sticker from it. Those stickers/labels are used to dissipate heat from the controller, so you, you know, don't need a heatsink. They act as a heatsink.

https://www.youtube.com/watch?v=KzSIfxHppPY&t=375s


No! Why would you assume crazy modifications like that? They write "naked"; it's naked as in not using the heatsink the mainboard supplies. https://www.gigabyte.com/us/Motherboard/X570-AORUS-MASTER-re... shows the cooler they are talking about.

You will just have to accept that you've been wrong on this topic. Move on, it happens.


No. Lol. Cooling anything other than the controller is pointless. You don't get throttling from NAND. I have the Gigabyte X570 Elite. That NVMe cover does nothing. And unless you're benchmarking the NVMe, I highly doubt you're going to heat up the controller enough to notice it. So yup. I agree. You're wrong. Let's move on.


Man, they are benchmarking the whole SSD. I actually sent you the graph. The SSDs go over 80°C and then throttle. Not all of them, but enough of them to be a real problem. The NVMe covers actually work in reducing the temperature; that's also in the graph. You are wrong, and you seem to have no knowledge about this topic at all; you really should not be so confident.

Read and learn! And then accept when an initial assumption turns out to be wrong.


It's not an assumption. NAND does not throttle. The controller does. Unless you run the SSD constantly, like in a benchmark, you are unlikely to heat the controller up enough to make it throttle. The airflow of a case is enough to keep an NVMe within limits so that everyday usage is not going to be hindered in any way, shape, or form. You don't need a heatsink. Linking to a constant-load benchmark doesn't change that. You're wrong.


I do not understand how you can still think that after my explanation and after looking at the graph I sent you. One very last try - though I assume at best it's for the benefit of other readers.

Go to https://www.computerbase.de/2020-09/samsung-980-pro-ssd-test... and look at the graph. You see that a bunch of them reach the 80°C line or hover above it. All of them throttle (which you said does not happen). The graph shows, going above the limit:

1. FireCuda 520 1TB

2. Patriot Viper VP4100 1TB

3. Samsung 970 Evo 1TB

4. WD Black SN750 1TB (+ the same one with a cooler)

This is only a small part of the market of course, but it goes to show that the throttling is a real thing that happens with multiple models.

Then you moved the goalposts, claiming that those SSDs do not throttle under realistic workloads. However, this is a sequential read that's only 5 minutes long. Hardly unrealistic. The hour-long constant-load benchmark is a different graph; but constant load is also realistic if it runs for longer than 5 minutes.

If you activate the other chart modes you see the measured performance, which shows the drops linked to the too high temperature, and that they did the same thing for write performance.

You can counteract this with a lot of targeted airflow and/or a heatsink; the heatsink will at least push the throttling to a later moment. GamersNexus had a very memorable demonstration of this, one where they initially got it wrong: they had an article about an MSI SSD heatsink where they claimed it did not help (so the SSD did throttle! Again, something you said never happens), and it then turned out the measurement was off because of how they applied the temperature sensors (glued to the heatsink), and IIRC they also missed the higher performance they got regardless. GN often gets it right, and stuff like that happens, but it made this one memorable and highlighted the positive effect of these heatsink coolers.

I've been into this topic professionally for years now. I'm not wrong here. If you can't take my word for it, look at professional SSD reviews; they have covered this for years now as well.

And sure: there are scenarios where this does not matter. Gaming. Browsing. But in those workloads there is no significant difference from a SATA SSD anyway. These NVMe SSDs are only interesting if you have large (and thus long) file transfers. That is what they have to get right (and some do, but not all of them).
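If anyone wants to check their own drive rather than take anyone's word for it, something like this works (a sketch assuming Linux, Python, the nvme-cli package and root; adjust the device path to your drive):

```python
# Polls the drive's composite temperature once per second via `nvme smart-log`,
# e.g. while a long file copy runs in another terminal, so you can see how close
# it gets to the ~80°C region where many drives start throttling.
import re, subprocess, time

DEVICE = "/dev/nvme0"   # adjust to the drive under test

for _ in range(600):    # poll for up to 10 minutes
    out = subprocess.run(["nvme", "smart-log", DEVICE],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"temperature\s*:\s*(\d+)", out)
    if m:
        print(f"{time.strftime('%H:%M:%S')}  {m.group(1)} °C")
    time.sleep(1)
```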


It’s like you’re not even reading what I’m saying. So I’ll just end it here. You’re wrong.


I'm not wrong, you are.

By the way: by repeating that I'm wrong, by starting with a flat "No", and by always commenting without reasoning or politeness, you made sure that I would keep correcting you - and that I'm not buying into your attempts to walk your statements back to something that is correct-ish. They don't work anyway; these SSDs throttle.

You should change your tone around here.


> They don't work anyway, these SSDs throttle.

But you don't know what causes the throttling. It's stupid to shove a heatsink on NAND. It does not help. Not a single bit. Period. Under normal daily usage, or even on a workstation, you're not going to be reading/writing so constantly that you're ever going to heat the controller up enough to cause throttling. If you experience excessive heat, you have bigger issues in your case. Period. Your only proof of throttling is benchmarks running constant reads/writes over a period of time. That is not real-world usage, and it doesn't make it necessary to go out and start shoving heatsinks on every single NVMe drive. If that were the case, then all the laptops which have space between the NVMe and the case, or motherboards which lack a "heatshield" like the Gigabyte board you linked to, would have throttling issues. Which they don't.

> You should change your tone around here.

So now you're threatening me?

-----

Anyway I'm done, not gonna sit here and argue anymore.


How does the lifetime of an SSD running at around 70°C compare to the same SSD with a slightly bigger heatsink that runs at 60°C?


I would run some tests though.

There are all kinds of vendors/models; who knows what kind of throttling they use?


The point being: let the user decide.

Having a fast machine that can sustain throughput and I/O is a perfectly rational desire, and for some of us, need.


If you do a lot of video work, daily!


Every time you recompile. Many times per day.


That must be one fast compiler.


Do your compiles write multiple gigabytes of data? Within seconds?


If you're running multithreaded build jobs with ninja and have many cores then maybe?


Unless you compile multi-gigabyte targets, all writes will likely fit in the RAM cache and thus cannot be a real bottleneck. That is assuming your compile farm can even read and compile at the GB/s level, which is pretty unrealistic.

To test that, one can try it with a ramdisk first, before buying an expensive SSD.
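For example, something along these lines (a sketch assuming Linux with /dev/shm mounted as tmpfs, a hypothetical make-based project, and a home directory that sits on the SSD being compared):

```python
# Copies the source tree to a ramdisk and to an on-disk location, then times the
# same build on each, to see whether disk I/O is actually the bottleneck.
import pathlib, shutil, subprocess, time

SRC = pathlib.Path("~/src/myproject").expanduser()   # hypothetical project path

def timed_build(workdir: pathlib.Path) -> float:
    dest = workdir / "ramdisk-vs-ssd-test"
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(SRC, dest)                        # fresh copy of the tree
    start = time.monotonic()
    subprocess.run(["make", "-j", "24"], cwd=dest, check=True)
    return time.monotonic() - start

ram = timed_build(pathlib.Path("/dev/shm"))           # RAM-backed tmpfs
ssd = timed_build(pathlib.Path.home())                # directory on the SSD
print(f"ramdisk build: {ram:.1f}s   ssd build: {ssd:.1f}s")
```

If both numbers come out about the same, that backs up the point above: the page cache is already absorbing the build's I/O, and a faster SSD won't help.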


And that is something else that is alien to laptops. The best laptop I could find (for when I am away from my desktop) is 8C16T; that makes a massive difference for compilation.



