bjourne's comments | Hacker News

Your summary of the article is wrong. The authors model temperature using time series of solar irradiance, volcanic activity, and the southern oscillation. They calibrate that model against time series of global surface temperatures. This allows them to isolate and remove each of the three listed confounding factors. The resulting time series fits a super-linear curve -> accelerating global warming.

> Your summary of the article is wrong. The authors model temperature using time series of solar irradiance, volcanic activity, and the southern oscillation. They calibrate that model against time series of global surface temperatures. This allows them to isolate and remove each of the three listed confounding factors.

No, it isn't. You're just rephrasing what I said with more words: they attempted to adjust for three of the biggest factors that affect temperature, then did a piecewise regression to estimate a trend over a 10-year window.

You can’t do it in a statistically valid way. Full stop. The authors admit this, but want you to ignore it.


> You can't do it in a statistically valid way.

They use an established methodology (https://doi.org/10.1088/1748-9326/6/4/044022 - the methodology retains the average warming rate over the period since 1970 while smoothing fluctuations) to remove predictable temperature variations so they can isolate the effect they are trying to measure.

Just because they don't know exactly what past global temperatures would have been in the absence of El Niño doesn't mean it's statistically invalid to try and account for it.

Besides, temperature data to 2024 already shows accelerated warming with a confidence level that "exceeds 90% in two of the five data sets".

Add another year or two and it's likely we won't even need to smooth the curve to show accelerated warming at 95% confidence.


They used a published methodology. That doesn't mean the methodology is uncontroversial, and it certainly doesn't mean that they used it in a way that makes sense in the current context. One can commit an almost infinite number of horrible abuses via bog-standard linear regression.

Even setting aside the dubious nature of the adjustments, doing a regression on a 10-year window of a system that we know has multi-decade cycles -- or longer -- is just blatantly trying to dress up bad point extrapolations as science. Then, when they don't get the results they want to see from that abuse, they start subtracting the annoying little details in the data that are getting in their way.

> Just because they don't know exactly what past global temperatures would have been in the absence of El Niño doesn't mean it's statistically invalid to try and account for it.

You can't go back in time, invent counterfactual histories by subtracting primary signals, and declare the net result to be "significant". This isn't even statistics -- it's just massaging data via statistical tools.

> Besides, temperature data to 2024 already shows accelerated warming with a confidence level that "exceeds 90% in two of the five data sets".

https://xkcd.com/882/

> Add another year or two and it's likely we won't even need to smooth the curve to show accelerated warming at 95% confidence.

I guess we'll find out.


If you were trying to determine if the quantity of daylight increased over a week in spring, would you account for the differences caused by day and night? What about cloud cover? Or is that just massaging the data?

P.S. The cited methodology has >300 citations in peer-reviewed publications (ref: Web of Science).


> If you were trying to determine if the quantity of daylight increased over a week in spring, would you account for the differences caused by day and night? What about cloud cover? Or is that just massaging the data?

Just to draw a better analogy to the low quality of the current work, let's say you wanted to compare average daylight last week, globally, to all of recorded history. Then you made a model that had terms for (say) astronomical daylight, longitude, latitude and, I dunno... altitude of the measurement. Then you ran a regression, subtracted three terms, and claimed that the residual was still "significantly darker". Then you ran around waving your arms and shouting that if we only extrapolate forward N weeks from last week, soon we'll be living in a fully dark world!

You'd be rightfully laughed out of any room you were in.


Actually, I used fewer words. I don't think you understand what the authors are doing. They are modeling temperature T per year as a sum of four terms, T = E + S + V + R: (E)l Niño, (S)olar irradiance, (V)olcanic activity, and (R)emaining factors. Then they subtract E, S, and V. Then they show that R fits a super-linear curve. Why there would be no "statistically valid way" to do this is beyond me, the authors, and the article's peer reviewers. If this is "bad methodology", lodge your complaints on https://pubpeer.com/.
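To make the decomposition concrete, here's a minimal sketch of the regress-and-subtract idea with synthetic data (the variable names, functional forms, and coefficients are my own illustration, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(55)  # years since 1970

    # Synthetic stand-ins for the three confounders.
    enso = rng.normal(0.0, 1.0, t.size)        # southern oscillation index
    solar = 0.1 * np.sin(2 * np.pi * t / 11)   # ~11-year solar cycle
    volc = -0.3 * (rng.random(t.size) < 0.05)  # occasional eruptions

    # Synthetic "observed" temperature: super-linear trend + confounders + noise.
    temp = (0.01 * t + 2e-4 * t**2 + 0.1 * enso + solar + volc
            + rng.normal(0.0, 0.05, t.size))

    # Calibrate: regress temperature on the confounders plus a linear trend,
    # so the confounder coefficients don't absorb the warming itself.
    X = np.column_stack([np.ones(t.size), t, enso, solar, volc])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

    # Subtract only the fitted confounder contributions; R is what remains.
    R = temp - X[:, 2:] @ coef[2:]

    # A positive quadratic coefficient in R indicates acceleration.
    print("quadratic term:", np.polyfit(t, R, 2)[0])

Whether the real E, S, and V can be cleanly estimated is exactly what's in dispute, but the mechanics are just multiple regression, not time travel.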

1) Their model is inherently dumb. The system is much more complicated, and its components are not cleanly separable.

2) They openly admit that “subtracting E, S and V”, as you say, cannot actually be done.

3) They’re arbitrarily removing sources of variation so that they can claim “significance” in a narrow window. The entire exercise is designed to achieve a predetermined outcome, and statistical significance cannot be calculated in those circumstances.


Pre-prints exist because it can take up to 18 months to get a paper published in a journal or reputable conference. Since lots of people can publish pre-prints[1], what you should think of one depends on the authors. If they have a record of publishing good research, you should think highly of the paper.

[1] - Actually, there are hoops to jump through on pre-print repositories such as arXiv, so not everyone can post there. I guesstimate that 99% of the public has no means of posting on arXiv.


> there is a lot of talk between Anthropic and the DOW about adopting LLM technology for warfare

Please cite that talk. Fully autonomous weapons and mass domestic surveillance are not the same as "adopting LLM technology". Please be precise.



IME, "random bitflips" is the engineer's way of saying "I'm sick and tired of root cause analysis" or "I have no fucking clue what the bug is." I, like others, remain skeptical about the claim.

We're not talking about unexplained bugs here. We're talking about a pointer that obviously has one bit flipped, and that would be correct if you flipped that one bit back.
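For concreteness, a minimal sketch (my own illustration, not taken from the crash data under discussion) of the check being described: a faulting pointer is "one bit away" from its expected value exactly when the XOR of the two is a power of two.

    def is_single_bitflip(bad_ptr: int, good_ptr: int) -> bool:
        # XOR isolates the differing bits; a power of two means exactly one.
        diff = bad_ptr ^ good_ptr
        return diff != 0 and (diff & (diff - 1)) == 0

    assert is_single_bitflip(0x7F00_0000, 0x7F00_0000 ^ (1 << 17))
    assert not is_single_bitflip(0x1000, 0x1003)  # two bits differ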

“I have no data, but I’m sure those who do have data, and have spent a significant amount of time analyzing it, are wrong.”

Well, touché. But I'm willing to change my mind once I've seen that data and the methodology Svelto used to analyze it. Extraordinary claims require extraordinary evidence.

I maintain my Python OpenCL wrapper for my own personal use: https://github.com/bjourne/myopencl (no plans to monetize).

Is it fast? The proof of the pudding for hardware acceleration is whether it runs faster than a plain GPU (or CPU) baseline.
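If the README doesn't answer that, it's quick to measure. A minimal sketch with NumPy as the CPU baseline; gpu_matmul is a placeholder, since I haven't looked at myopencl's API:

    import time
    import numpy as np

    def bench(fn, *args, repeats=10):
        # Best-of-N wall-clock time, to dampen warm-up and scheduler noise.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - start)
        return best

    a = np.random.rand(4096, 4096).astype(np.float32)
    b = np.random.rand(4096, 4096).astype(np.float32)
    print(f"CPU matmul: {bench(np.matmul, a, b):.3f} s")
    # print(f"GPU matmul: {bench(gpu_matmul, a, b):.3f} s")  # wrapper call here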

That's quite a preemptive form of preemption! Was the US intelligence from the same source that stated that Iraq was acquiring "yellowcake" from Niger?

This justification for bombing Iran is dumb as fuck. In a few days the number of civilians killed by US-Israeli bombings will surpass the number of civilians killed by the regime in decades.

Possibly.

What is that threshold? I've heard anywhere from 3k to 300k. You can definitively answer this question?


300k? You mean 30k right?

Iranian official numbers are 3.5k. The OSINT community says at least 15k in the 3 biggest cities (including pro-regime Guardians of the Revolution), and 'local' journalists (a lot with CIA ties, though), no friends of the system, say 30k.

I wouldn't trust Iran with a butter knife, so I imagine between 15 and 30k, including 1 to 2k 'guardians'.


> 300k? You mean 30k right?

30k was just the last protests; they were talking about the entire regime's crimes, which is much, much more.


Let's count. Power consolidation (post-revolution): 10-20k. 100k during the first gulf war, but I think you should put that on the US (and maybe Iraq, but it's the US that pushed Iraq to attack Iran). Then a bit more than 50 executions per year on average for 30 years, 100-300 in 2019/2020, and 15k-35k for the 2025/2026 protests. So even if you take the higher bound, that's 66k max, and if you count the gulf war (which was defensive, against a US-backed Iraqi invasion), 166k. But a reasonable estimate would not count the gulf war, and would be 35k over 40 years.

Weirdly, that's less than the number of Saudi Arabian slaves who have died in the last 20 years. But most of them are African, so they don't count, if I understand correctly why Saudi Arabia is our ally.


The 15-35k figure for protestors killed is a complete fabrication. No verifiable sources corroborate it. Media has a tendency to report figures based on nothing. Then those figures get established as the truth, which shifts the burden of proof: unless one can prove that 15-35k protestors weren't killed, the myth lives on.

Killing more people won't bring dead people back to life! I can't believe I have to spell this out.

> This justification for bombing Iran is dumb as fuck. In a few days the number of civilians killed by US-Israeli bombings will surpass the number of civilians killed by the regime in decades.

I was just curious if you had information that I don't have. I suppose not.


Does your theory pass the sniff test? How reasonable is it to believe that Greenpeace's "defamation" cost the company hundreds of millions of dollars? Why is 345 the right three-digit number of millions for the reputational damage Greenpeace caused?

Operation Epstein's Fury
