
In case you confused this with webgl as I did:

> WebGPU is a new API for the web, which exposes modern hardware capabilities and allows rendering and computation operations on a GPU, similar to Direct3D 12, Metal, and Vulkan. Unlike the WebGL family of APIs, WebGPU offers access to more advanced GPU features and provides first-class support for general computations on the GPU.
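For the curious, a minimal sketch of how a page gets at it (API names as defined in the WebGPU spec; feature detection via navigator.gpu):

    // Minimal WebGPU bring-up sketch: feature-detect, then request an
    // adapter (the physical GPU) and a device (the logical connection).
    (async () => {
        if (!navigator.gpu) {
            console.log("WebGPU not available in this browser");
            return;
        }
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) {
            console.log("No suitable GPU adapter found");
            return;
        }
        const device = await adapter.requestDevice();
        console.log("Got a GPUDevice", device);
    })();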



And to prevent device fingerprinting, all the operations are specified to deterministically produce the same bit-exact results on all hardware, and the feature set is fixed without any support for extensions, right?

Or is this yet another information leak anti-feature that we need to disable?


There is no way to escape fingerprinting.

Just one example: A script which runs many different types of computations. Each computation will take a certain amount of time depending on your hardware and software. So you will get a fingerprint like this:

    computation 1: **
    computation 2: ****
    computation 3: **********
    computation 4: **
    computation 5: **************
    computation 6: ************
    computation 7: *********
    etc
There is no way to avoid this. You can make the fingerprint more noisy by doing random waits. But that's all.


Just put WebGL/WebGPU behind permission and the problem is solved. I don't understand why highly paid Google and Firefox developers cannot understand such a simple idea.


For a user to correctly answer a permissions dialog, they need to learn programming and read all the source code of the application. To say nothing of the negative effects of permission dialog fatigue.

In practice, no-one who answers a web permissions dialog truly knows if they have made the correct answer.

Asking the user a question they realistically can't answer correctly is not a solution. It's giving up on the problem.


I think browsers should distinguish more aggressively between "web application", "web site", and "user hostile web site".

Many APIs should be gated behind being a web application. This itself could be a permission dialog already, with a big warning that this enables tracking and "no reputable web site will ask for it unless it is clear why this permission is needed - in doubt, choose no".

Collect opt-in telemetry. Web sites that claim to be a web application but keep getting denied can then be reclassified as hostile web sites, at which point they not only lose the ability to annoy users with web app permission prompts, but also other privileges that web sites don't need.


Clearly if we knew how to perfectly identify user hostile websites we'd not need permissions dialogs at all.

Distinguishing between site and app, e.g. via an installation process, is equivalent to a permissions dialog, except that you're now advocating for one giant permission dialog instead of fine-grained ones, which seems like a step backwards.


Yes, if we knew how to do it perfectly, we wouldn't need them. But we can identify some known-good and known-bad cases with high confidence. My proposal mainly addresses the "fatigue" aspect: it allows apps to use some of the more powerful features without letting every web site use them, and it prevents random web sites from declaring themselves an app and spamming users with the permission request just so they can abuse the users more.

The new permission dialog wouldn't grant all of the finer-grained permissions - it would be a prerequisite to requesting them in the first place.


SafeBrowsing filters out the known bad ones.

Curating known good would equate to some sort of app store. There are probably initiatives to make one for web apps, but it kind of makes me sad to think of applying that to the web, which is supposed to be a free and open commons (although I suppose Google already de facto controls enough of it to be considered a bit of a gatekeeper).

Making the user the arbiter of "known good", ie reliance on permissions dialogs, is not perfect but it's what we have. Yet I fail to see how your proposal of "just add ANOTHER dialog" improves the situation.


SafeBrowsing filters whatever Google wants filtered. It has only a marginal overlap with "bad sites".


Do you have something specific in mind with your opening paragraph?

Because defining what is a web site and what is an app strikes me as a particularly impractical idea. You correctly point out that yes, there are a number of powerful APIs that should be behind permissions. But there are a number of permissions already, so we need to start bundling them and also figure out how to present all this to the regular user.

Frankly, I wouldn't know where to begin with all this.


News sites are a particular category that I expect to spam people with permission prompts, as they did when notifications became a thing. Without the deterrent of possibly landing in the naughty box, they'd all do it. With it, I still expect some of them to try until they land in the box.


> In practice, no-one who answers a web permissions dialog truly knows if they have made the correct answer.

Counterpoint: if a webpage with the latest news (for example) immediately asks me to allow notifications, access to my webcam, and location, I definitely know the correct answer to these dialogs.


"Do you want to allow example.com to send you notifications" is way more understandable to a layperson than "do you want to allow access to WebGPU" or "do you want to allow access to your graphics card". Especially because they would still have access to canvas and WebGL.

Permission prompts are a HUGE user education issue and also a fatigue issue. Rendering is widely used on websites so if users get the prompt constantly they're going to tune it out.


You can always word things in a way that the user understands.

> Especially because they would still have access to canvas and WebGL.

Those should also be behind a (or the same) permission prompt.


They don't need to learn programming. Just write that this technology can be used for displaying 3D graphics and fingerprinting, and let the user decide whether to take the risk.


They're going to be confused if you say "display 3D graphics", because canvas and WebGL will still work. The website will just be laggier and burn their battery faster. That's not going to make sense to them.

"Fingerprinting" is a better approach to the messaging, but is also going to be confusing since if you take that approach, almost all modern permissions are fingerprinting permissions, so now you have the problem of "okay, this website requires fingerprinting class A but not fingerprinting class B" and we expect an ordinary user to understand that somehow?


Most of them will say, "I need to see this site, who cares about fingerprints." Some will notice that they're on their screen anyway, a few will know what it's all about.

Maybe "it can be used to display 3D graphics and to track you", but I expect that most people will shrug and go on.


You could maybe display the request in the canvas instead of a popup. If the user can't see it, they'll never say yes.


Just put WebGL/WebGPU behind permission and the problem is solved.

Just put WebUSB behind permission and the problem is solved.

Just put WebHID behind permission and the problem is solved.

Just put WebMIDI behind permission and the problem is solved.

Just put Filesystem Access behind permission and the problem is solved.

Just put Sensors behind permission and the problem is solved.

Just put Location behind permission and the problem is solved.

Just put Camera behind permission and the problem is solved.

Just put ...

I don't understand why highly paid Google and Firefox developers cannot understand such a simple idea.


I can't tell whether you're kidding or not, but this is exactly the path Firefox was advocating: https://blog.karimratib.me/2022/04/23/firefox-webmidi.html

The page implies it no longer requires permissions, but I just tested and you definitely get a permissions popup, just a different one.

WebHID, WebUSB, and Filesystem Access are, IIRC, "considered harmful", so they won't get implemented. And Sensor support was removed after sites started abusing battery APIs.


> I can't tell whether you're kidding or not,

I'm not. It's a bit sarcastic (?), listing a subset of APIs that browsers implement (or push forward against objections, like the hardware APIs) and that all require some sort of permission.

> but this is exactly the path Firefox was advocating

Originally? Perhaps. Since then Firefox's stance is very much "we can't just pile on more and more permissions for every API because we can't properly explain to the user what the hell is going on, and permission fatigue is a thing"


Everything except WebGL and WebGPU allows the system to change more state than what is rendered on a screen.

Users already expect browsers to change screen contents. That's why WebGPU / WebGL aren't behind a permission block (any more so than "show images" should be... Hey, remember back in the day when that was a thing?).


Yes please


Saturating the user with permissions requests for every single website they visit is a dead-end idea. We have decades of browser development and UI design history to show that if you saturate the user with nag prompts that don't mean anything to them, they will just mechanically click yes or no (whichever option makes the website work).


Permission popups can be replaced with an additional permission toolbar or with a button in the address bar that the user needs to click. This way they won't be annoying and won't require a click to dismiss.


Like the site settings page on Chrome, which is in the address bar (clicking the lock icon)? You can set the permissions (including defaults) for like 50 of these APIs.


You can display only permissions that a page requests, starting from most important ones.

For example, toolbar could look like:

Enable: [ location ] [ camera ] [ fingerprinting via canvas ] ...


We already have extensions for websites that spam the user with unwanted popups and other displays. Those just need to be extended to cover permission abuse and be included by default in all web browsers.


I've been doing this since forever, but I have to give explicit permission to load and run JS, which solves a lot of other problems as well. Letting any site just willy-nilly load code from wherever and run it on your machine is insane, and it's well worth the effort to manually whitelist every site.


uMatrix was and unfortunately still is the best interface for fine-grained opt-in permissions.


Look to the cookie fatigue fiasco for how that might turn out. This simple idea is not always the right one.


> [ ] Always choose this option.


Why fiasco?


They are highly paid enough to not work on it and smart enough to thwart suggestions like this with the "permission overload" issue.

But more frankly, fingerprinting is a whack-a-mole issue, and if it were a real security problem, it would slow feature advancements.

And fingerprinting is too unreliable for any real world use.


It's not that they don't understand it, it's that they don't want the average user to have a convenient way to control this setting. Prompting the user for permission would give the user a very convenient way to keep it disabled for most websites. It's as simple as that.

Think about it this way: which is more tedious, going into the settings and enabling and disabling WebGPU every time you need it, or a popup? Which way would see you keeping it enabled?

It's the tyranny of the default with an extra twist :)


> why highly paid Google ... developers

"Completely co-incidentally", it's in Google's best interest to be able to fingerprint everyone.

So, changing it to actually be privacy friendly while they have the lion's share of the market doesn't seem like it's going to happen without some major external intervention. :/


It's running on Chrome. Google doesn't need fingerprinting. By making it harder for others to fingerprint, it actually cements Google's position in the ad market.


> It's running on Chrome. Google doesn't need fingerprinting.

Are you saying that because you reckon everyone using a Chromium based browser logs into a Google account?


"Be kind. Don't be snarky. Converse curiously. Please don't sneer"

HN Guidelines

https://news.ycombinator.com/newsguidelines.html


They probably can understand these concepts, but privacy and anonymity are not their main priorities.


Just don't use Chrome. There are plenty of alternative web browsers you can choose that are more privacy-oriented. You are not Chrome's customer unless you pay for it - or you have a 100% money-back guarantee. Demanding features on a free product is never going to go anywhere.


You can reduce clock precision, which has already been done to mitigate speculative execution attacks. You can delay network requests to prevent the JS from using the server as a more precise clock. In addition to random delays, you can quantize execution times by only responding in 100ms increments, for example. You can do lots of things to mitigate fingerprinting, if not completely prevent it.
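A rough sketch of the quantization idea (illustrative only; a real browser would do this inside the engine, not via a monkey patch, and the quantum is an assumed value):

    // Round script-visible timestamps down to a coarse quantum so timing
    // measurements carry fewer bits of information.
    const GRANULARITY_MS = 100;
    const realNow = performance.now.bind(performance);
    performance.now = () =>
        Math.floor(realNow() / GRANULARITY_MS) * GRANULARITY_MS;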

But then you could also just omit features that have no reason to exist in the first place.


Or everybody can just buy the same (i)Phone :)


You only get fingerprinting from your method if the variation of the “fingerprint“ between two different runs by the same user is lower than the difference you get between two different users. This is far from obvious since it depends a lot on the workload running on the machine at the time.

I'm not aware of a single fingerprinting tool that primarily uses this kind of timing attack rather than more traditional fingerprinting methods.


Not sure if the workload makes a difference.

We would have to make examples of what Computation1 is and what Computation2 is to predict whether certain types of workloads will impact the ratio of their performance.

Example:

    // Computation 1: plain integer additions
    let s = performance.now();
    let r = 0;
    for (let i = 0; i < 1000000; i++) r += 1;
    const t1 = performance.now() - s;

    // Computation 2: regex matching on a string
    s = performance.now();
    r = 0;
    for (let i = 0; i < 1000000; i++) r += "bladibla".match(/bla/)[0].length;
    const t2 = performance.now() - s;

    console.log("Ratio: " + t2 / t1);
For me, the ratio is consistently larger in Chrome than in Firefox. Which workload would reverse that?


Fingerprinting in the usual sense of the term isn't about distinguishing Chrome from Firefox; it's about distinguishing user A from user B, … user X reliably, in order to be able to track the user across websites and navigation sessions.

Your example is unlikely to get you far.

Edit: in a quick test, I got a range between 8 and 49 in Chrome, and between 1.27 and 51 (!) on Firefox, on the same computer; the results are very noisy.


Chrome and Firefox here are an example for "Two users who use exactly the same hardware but different software".

To distinguish between users of a larger set, you do more such tests and add them all together, each test adding a few bits of information.

To make the above code more reliable, you can measure the ratio multiple times:

https://jsfiddle.net/dov1zqtL/

I get 9-10 in Firefox and 3-4 in Chrome very reliably when measuring it 10 times.
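Roughly what the fiddle does, re-sketched from memory (not the exact fiddle code):

    // Time two contrasting workloads repeatedly and take the median ratio
    // to reduce noise from whatever else the machine is doing.
    function timeIt(fn) {
        const s = performance.now();
        fn();
        return performance.now() - s;
    }

    const ratios = [];
    for (let run = 0; run < 10; run++) {
        const t1 = timeIt(() => { let r = 0; for (let i = 0; i < 1e6; i++) r += 1; });
        const t2 = timeIt(() => { let r = 0; for (let i = 0; i < 1e6; i++) r += "bladibla".match(/bla/)[0].length; });
        ratios.push(t2 / t1);
    }
    ratios.sort((a, b) => a - b);
    console.log("median ratio:", ratios[Math.floor(ratios.length / 2)]);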


> Chrome and Firefox here are an example for "Two users who use exactly the same hardware but different software".

But it's also the most pathological example one can think of, yet the results are extremely noisy (while being very costly, which means you won't be able to make a big number of such tests without dramatically affecting the user's ability to just browse your website).


It's possible to have the runtime execute the computations in fixed time across platforms.


Sure. And nobody actually wants that, because it would be so restrictive in practice that you might as well just limit yourself to plain text.

The horse bolted long ago; there's little sense in trying to prevent future web platform features from enabling fingerprinting, because the existing surface that enables it is way too big to do anything meaningful about it.

Here are a couple of more constructive things to do:

- Campaign to make fingerprinting illegal in as many jurisdictions as possible. This addresses the big "legitimate" companies.

- Use some combination of allow-listing, deny-listing, and "grey-listing" to lock down what untrusted websites can do with your browser. I'm sure I've seen extensions and Pi-hole type products for this. You could even stop your browser from sending anything to untrusted sites except simple GET requests to pages that show up on Google. (I.e. make it harder for them to smuggle information back to the server.)

- Support projects like the Internet Archive that enable viewing large parts of the web without ever making a request to the original server.


This would essentially mean that every computation would have to run as slow as the slowest supported hardware. It would completely undermine the entire point of supporting hardware acceleration.

I’m sympathetic to the privacy concerns but this isn’t a solution worth considering.


The solution is to put unnecessary features like WebGL, the programmatic Audio API, reading bits from canvas, and WebRTC behind a permission.


Who decides what's unnecessary?


Everything that can be used for fingerprinting should be behind a permission. Almost all sites I use (like Google, Hacker News or Youtube) need none of those technologies.


The main thing that ought to be behind a permission is letting JavaScript initiate connections or modify anything that might be sent in a request. Should be possible, but ought to require asking first.

If the data can't be exfiltrated, who cares if they can fingerprint?

Letting JS communicate with servers without the user's explicit consent was the original sin of web dev, that ruined everything. Turned it from a user-controlled experience to one giant spyware service.


If javascript can modify the set of URLs the page can access (e.g. put an image tag on the page or tweak what images need to be downloaded using CSS) then it can signal information to the server. Without those basic capabilities, what's the point of using javascript?


So CSS should be behind a permission?


CSS should not leak fingerprinting information. After all this is just a set of rules to lay out blocks on the page.



No video driver is actually going to implement fixed-time rendering. So you'd have to implement it in user-space, and it would be even slower than WebGL. Nobody wants that. You're basically just saying the feature shouldn't ship in an indirect way (which is a valid opinion you should just express directly.)


I don't mean to prescribe the way to stop fingerprinting, just to throw out a trivial existence proof, and maybe a starting point for thinking, that it's not impossible as was suggested.

Also, WebGPU seems to conceptually support software rendering ("fallback adapter"), where fixed time rendering would seem to be possible even without getting cooperation from HW drivers. Being slower than WebGL might still be an acceptable tradeoff at least if the alternative WebGL API avenue of fingerprinting could be plugged.
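Requesting the fallback adapter is already expressible in the API (option name per the WebGPU spec):

    // Ask for the software/fallback adapter instead of a hardware GPU.
    (async () => {
        const fallback = await navigator.gpu.requestAdapter({
            forceFallbackAdapter: true
        });
        console.log(fallback ? "got a fallback adapter"
                             : "no fallback adapter available");
    })();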


Could you explain what techniques would make this possible? I can see how it's possible in principle, if you, say, compile JS down to bytecode and then have the interpreter time the execution of every instruction. I don't immediately see a way to do it that's compatible with any kind of efficient execution model.


The rest would be optimization while keeping the timing side-channel constraint in mind; hard to say what the performance possibilities are. For example, not all computations have externally observable side effects, so those parts could be executed conventionally if the runtime could guarantee it. Or the program-visible clock APIs might keep a virtual time that makes operations seem slower than they are from a timing point of view, combined with network API checkpoints that halt execution until virtual time catches up with real time. Etc. Seems like an interesting research area.
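A very rough sketch of that virtual-time idea (purely conceptual; all names here are made up, and real enforcement would have to live inside the engine):

    // The engine charges a fixed, platform-independent cost per tracked
    // operation; the page only ever sees this virtual clock.
    let virtualNow = 0;
    const FIXED_COST_MS = 0.001;              // assumed fixed cost per op

    function charge(opCount) {                // called by the engine
        virtualNow += opCount * FIXED_COST_MS;
    }

    function virtualPerformanceNow() {        // what performance.now() would return
        return virtualNow;
    }

    async function networkCheckpoint(realStart) {
        // Before any externally observable I/O, stall until real time has
        // caught up with virtual time, so the server can't learn real speed.
        const deficit = virtualNow - (performance.now() - realStart);
        if (deficit > 0) await new Promise(r => setTimeout(r, deficit));
    }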


>not all computations have externally observable side effects

You can time any computation. So they all have that side effect.

Also, from Javascript you can execute tons of C++ code (e.g. via DOM manipulation). There's no way all of that native code can be guaranteed to run with consistent timing across platforms.


Depends on who you mean by "you". In context of fingerprinting resistance the timing would have to be done by code in certain limited ways using browser APIs or side channels that transmit information outside the JS runtime.

Computations that call into native APIs can be put in the "has observable side effects" category (but in more fine grained treatment, some could have more specific handling).


I'm not sure what you mean. All you need to do is this:

    function computation() { ... }
    before = performance.now();
    computation();
    t = performance.now() - before;
(Obviously there will be noise, and you need to average a bunch of runs to get reliable results.)


In this case the runtime would not be able to guarantee that the timing has no externally observable side effects (at least if you do something with t). It would then run in the fixed execution speed mode.


Lots of code accesses the current time. So I think you'd end up just running 90% of realistic code in the fixed execution speed mode, which wouldn't be sufficiently performant.


The runtime doesn't have full control, but it could introduce a lot of noise in timing and performance. Could that help?


It's hard to reason about how much noise is guaranteed to be enough, because it depends on how much measurement the adversary has a chance to do, there could be collusion between several sites, etc. To allow timing API usage I'd be more inclined toward the virtual time thing I mentioned upthread.


I wish all information-leaky browser features were turned off by default and I could easily turn them on on demand when needed. Like, the browser could detect that a webpage accesses one of them and tells me that I am currently experiencing a degraded experience which I could improve by turning this slider on.


I've set up my Firefox with resistFingerprinting but without an auto deny on canvas access.

It's sickening to see how often web pages still profile you, but the setting seems to work.

Similarly, on Android there's a Chromium fork called Bromite that shows JIT, WebRTC, and WebGL as separate permissions, denied by default. I only use it for when broken websites don't work right on Firefox, but websites seem to function fine without all those permissions being enabled by default.

Competent websites will tell you the necessary settings ("WebGL is not available") so making the websites work isn't much trouble. I'd much rather see those error messages than getting a "turn on canvas fingerprinting for a better experience" popup from my browser every time I try to visit blogs or news websites.


Right. But I don't want to have to dig into settings hierarchies for those knobs. The threshold for that is too high and almost nobody will bother to do that. Something easier with simple sliders would be much better.


I believe the LibreWolf browser does this. It's basically Firefox with all the fingerprintable features turned off.



The trade-off for extracting maximum performance from a user's hardware is that it becomes much easier to fingerprint. Judging by the history of the web this is a trade-off that probably isn't worth making.


I track population frequency of WebGPU extensions/limits here: https://web3dsurvey.com. The situation currently is much better than with WebGL1/WebGL2, but there is still a lot of surface area.


Interesting. The data shows that WEBGL_debug_renderer_info [1], which allows sites to know the name of your graphics card, is supported in almost 100% of browsers. Seems that better fingerprinting support is really a priority among all browser vendors.

[1] https://web3dsurvey.com/webgl/extensions/WEBGL_debug_rendere...
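For context, this is how pages read the GPU name today (extension and constant names as defined by WEBGL_debug_renderer_info):

    const gl = document.createElement("canvas").getContext("webgl");
    const ext = gl && gl.getExtension("WEBGL_debug_renderer_info");
    if (ext) {
        // Typically yields vendor and renderer strings, e.g. an
        // ANGLE/Direct3D description on Windows.
        console.log(gl.getParameter(ext.UNMASKED_VENDOR_WEBGL));
        console.log(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL));
    }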


This is sadly a requirement for the time-honored game development tradition of "work around all the bugs in end user drivers", which also applied to WebGL while it was still immature.

At this point there's probably no excuse for continuing to expose that info though, since everyone* just uses ANGLE or intentionally offers a bad experience anyway.


Fingerprinting is a very difficult and unreliable way of identifying users. You would not bank on fingerprinting to protect your money. You cannot bank on it to protect user info. You can only hope that you are targeting the right person.


"..all the operations are specified to deterministically produce the same bit-exact results on all hardware..."

You have to block floating point calculations as well if that is your intent.


The animals already fled the barn on that one: WebAssembly floating point is not specified to be bit-exact, so you can use WASM FP as a fingerprinting measure (theoretically; I don't know under which configurations it would actually vary).


As long as you can keep it off or turn it off, I think this is a good option to have. I too would prefer to have the Web split into two parts, documents and apps; then I could have a browser that optimizes for JS and GPU speed, and a simple, safe browser for reading Wikipedia and articles.

I am sure there will be browsers that will not support this or keep it off, so at worst you need to give up on Chrome and use a privacy-friendly browser.


> all the operations are specified to deterministically produce the same bit-exact results on all hardware,

I want this so badly. A compiler flag perhaps, that enables running the same program with the exact output bit for bit on any platform, perhaps by doing the same thing as a reference platform (any will do), even if it has a performance penalty.


I'm surprised people accept non-bit-identical output. Intel did a lot of damage here with their wacky 80-bit floating point implementation, but really it should be the norm for all languages.


Why would I want bit-identical output? Genuinely curious.

I see there's some increase in confidence perhaps, although the result can still be deterministically wrong...


It's very hard to do tests of the form assert(result == expected) if they're not identical every time.

And it can waste a horrendous amount of time if something is non-bit-identical only on a customer machine and not when you try to reproduce it ...


Trying to reproduce is a good point, but it's usually a pretty bad idea to do tests of the form assert(result == expected) with a floating point result. You're just asking for trouble in all but the simplest of cases. Tests with floating point should typically allow for LSB rounding differences, or use an epsilon or explicit tolerance knob.

There’s absolutely no guarantee that a computation will be bit-identical even if the hardware primitives are, unless you use exactly the same instructions in exactly the same order. Order of operations matters, therefore valid code optimizations can change your results. Plus you’ll rule out hardware that can produce more accurate results than other hardware if we demand everything be bit-identical always, it will hold us back or even regress. Hardware with FMA units are an example that produce different results than using MUL and ADD instructions, and the FMA is preferred, but hardware without FMA cannot match it. There are more options for similar kinds of hardware improvements in the future.
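One common shape for such a tolerance check (illustrative; not any particular test framework's API):

    // Compare with a relative tolerance plus an absolute floor, instead of ==.
    function assertClose(actual, expected, relTol = 1e-9, absTol = 1e-12) {
        const diff = Math.abs(actual - expected);
        const bound = Math.max(absTol,
            relTol * Math.max(Math.abs(actual), Math.abs(expected)));
        if (diff > bound) {
            throw new Error(`got ${actual}, expected ${expected} (diff ${diff})`);
        }
    }

    assertClose(0.1 + 0.2, 0.3);     // passes
    // 0.1 + 0.2 === 0.3 is false: the sum is 0.30000000000000004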


> Order of operations matters, therefore valid code optimizations can change your results.

This is exactly why optimizations that change the order of floating point operations aren't valid! And many other optimizations, like (I learned this just recently) transforming x + 0.0 into x: those are not the same thing when x is -0.0. In other news, -ffast-math produces broken code.

Current programming languages enable writing 100% deterministic floating point code just fine (even with compiler optimizations, as long as they are not buggy). The trouble is writing cross-platform deterministic floating point code, that works the same in every machine, but with great care it still can be done, as in https://rapier.rs/docs/user_guides/rust/determinism/ (well this project does this for every platform that supports IEEE 754-2008)
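The signed-zero case is easy to see from JS, since its numbers are IEEE 754 doubles:

    const x = -0.0;
    console.log(Object.is(x, -0.0));          // true
    console.log(Object.is(x + 0.0, -0.0));    // false: -0.0 + +0.0 is +0.0
    console.log(1 / x, 1 / (x + 0.0));        // -Infinity, Infinity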


Cross-platform bit-matching determinism is a tradeoff. It’s not a correctness or accuracy issue. It’s one of many goals one might have, and it comes with advantages and disadvantages. Like I pointed out above, you may be trading away higher accuracy in order to achieve cross-platform determinism. You also trade away performance almost certainly.

You say “aren’t valid” and “broken code” as though it’s somehow factual, when in reality you’re making opinionated assumptions about your choice of tradeoff. Those opinions are only true if you assume that only bit-matched results are “valid”. This hyperbolic wording breaks down a little once we start talking about the accuracy of floating point calculations and how bit-matching FP calculations on two different machines is just making two wrong values agree, and there’s nothing “exact” about it.

It is 100% absolutely fine to have bit-matching determinism as a goal, and I’m in favor of compilers supporting it. I’m not suggesting anyone shouldn’t, but I hope you recognize your language is implicitly demanding that everyone must care about floating point determinism just because you do. Some people have serious floating point calculations where they want cross-platform determinism, but -ffast-math exists precisely because many people do not need it, or because they simply prioritize performance over bit-matching, or because they engineered with epsilons instead of unrealistic expectations. There are good reasons why Rapier’s cross platform determinism is not the default, right?

Generally speaking, even the people who have strong reasons to want bit-matching results on different hardware, because they understand the nature of floating point and the reality of the hardware landscape, do not depend on it to be true, they still write their tests using tolerances.


It's not that hard. You'll just have to decide what level of accuracy you want to have.

Asserting == with floating point numbers is basically a kind of rounding anyway.


The easiest option to prevent fingerprinting is to disable WebGPU. Or even better, which is already the choice for many today, use one of the privacy-focused web browsers instead of Chrome.

Meanwhile there is a large audience who will benefit from WebGPU features, e.g. gamers, and this audience numbers in the hundreds of millions.


Someone mean would say that this is not a bug, but a feature for the people who are paying for Chrome.


Google are the people paying for Chrome, they do not benefit in any way from this kind of fingerprinting. To the contrary, it decreases the value of their browser monopoly.


> Google are the people paying for Chrome, they do not benefit in any way from this kind of fingerprinting.

The largest ad company in the world 80% of whose money comes from online advertising does not benefit from tracking...


They don't benefit from fingerprinting, because the browser has all sorts of easier to track mechanisms available by default. Fingerprinting is for browsers that don't actively enable tracking.


Even supposing that Google do benefit from it in that manner, there would be far simpler ways for them to make fingerprinting easier. It's extremely unlikely that this is a significant motivation for adding WebGPU. Not to mention that a lot of the fingerprinting you can potentially do with WebGPU can already be done with WebGL.


Google has many hands in many pots. It's not that they are necessarily looking for easier ways to do fingerprinting. But they sure as hell wouldn't put up a fight to make it harder.


Then why does Chrome contain loads of features to make fingerprinting harder?


Chrome has to walk a fine line between what it does for privacy and what it says it does. So you have the protection against fingerprinting, and at the same time you have the FLoC fiasco.


The simplest explanation is that the Chrome developers genuinely want to protect privacy and also genuinely want to add features. Every browser has to make that trade off. There are plenty of fingerprinting vulnerabilities in Firefox and Safari too.

The reasoning here seems to be something like "Google is evil; X is an evil reason for doing Y; therefore Google must have done Y because of X". It's not a great argument.


I can only quote Johnathan Nightingale, former executive of Mozilla, from his thread on how Google was sabotaging Firefox [1]:

"The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional."

[1] Thread: https://twitter.com/johnath/status/1116871231792455686


That whole Twitter thread says nothing about fingerprinting or privacy. The first comment is close to gibberish, but seems to be mostly about some kind of Google office development project in Toronto.

You are literally following the parody argument schema that I mentioned in my previous comment. You make some vague insinuations that Google is evil, then attribute everything it does to non-specific evil motivations. Even if Google is evil, this kind of reasoning is completely unconvincing.


> That whole Twitter thread says nothing about fingerprinting or privacy.

I should've been more clear. In this case I was responding to this: "The reasoning here seems to be something like "Google is evil; X is an evil reason for doing Y; therefore Google must have done Y because of X". It's not a great argument."

> You are literally following the parody argument schema that I mentioned in my previous comment.

Because you have to look at the behaviour of the organism as a whole. If the shoe fits etc.


Google doesn't benefit because Google has committed not to fingerprint for ad targeting, but their competitors do.


You've got it backwards.


Google Ads, 2020-07-31:

What is not acceptable is the use of opaque or hidden techniques that transfer data about individual users and allow them to be tracked in a covert manner, such as fingerprinting. We believe that any attempts to track people or obtain information that could identify them, without their knowledge and permission, should be blocked. We’ll continue to take a strong position against these practices. -- https://blog.google/products/ads-commerce/improving-user-pri...

Google Ads, 2021-03-03:

Today, we’re making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products. -- https://blog.google/products/ads-commerce/a-more-privacy-fir...

(I used to work on ads at Google, speaking only for myself)


1. In the context of browsers Google's competitors are Safari and Firefox. And in this context Google is always consistently behind: either unwilling to implement the same privacy protections, or implementing them years later, or coming up with non-solutions

2. It's funny how you link to a Google propaganda piece on FLoC. Whereas Google's competitors (context: browsers) actually try to reduce fingerprinting, tracking, and third-party cookies, Google is trying to have the cake and eat it too with FLoC. Which was such a blatant attempt to keep fingerprinting and tracking alive that everyone immediately disabled it within months of Google's experiments with it.

Edit: Tracking and fingerprinting is Google's bread and butter, literally: 80% of its money comes from targeted advertising.


If you're trying to understand what Google's doing here and what their incentives are, it's important to distinguish between tracking in general and specifically using fingerprinting to track. They're very interested in showing people relevant ads based on their history, but only in ways where users have some control. With the traditional approach of third-party cookies, for example, the user can clear some or all cookies, open a private browsing window, or use extensions to limit what cookies are sent/received where. With fingerprinting, however, the user has no control: if I clear cookies I'll still have the same fingerprint, and I can't tell the web to forget me anymore. Same if I open a private browsing window, close it, and open it again. We started this thread with the question of whether Chrome adding an API that increased the fingerprinting surface benefited Google, and I've been arguing no: as shown in my quotes above Google has committed not to use fingerprinting.

Your (1) and (2) are about tracking in general and not fingerprinting. On (1), I agree that Google is behind. This is explicitly a strategy to (a) protect ads monetization and (b) avoid a situation where you turn off third party cookies only to have advertisers move to something worse (see: being anti-fingerprinting):

After initial dialogue with the web community, we are confident that with continued iteration and feedback, privacy-preserving and open-standard mechanisms like the Privacy Sandbox can sustain a healthy, ad-supported web in a way that will render third-party cookies obsolete. Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. -- https://blog.chromium.org/2020/01/building-more-private-web-...

On (2), while FLoC was abandoned, the successor, Topics, is still moving forward: https://developer.chrome.com/docs/privacy-sandbox/topics/ Note that unlike FLoC it only observes pages where the page calls "document.browsingTopics()". I don't see how FLoC or Topics represent trying to "have the cake and eat it too" -- they're explicitly attempts to move user interest tracking from the server to the client, to address some of the privacy issues people have with server-side tracking.

On "literally: 80% of its money comes from targeted advertising" that's wrong? The vast majority of Google's income is from ads, yes, but it's mostly from search ads, which aren't targeted.

(I used to work in this area at Google; speaking only for myself)


Of course they do, but they want to allow fingerprinting in ways that only Google gets the data (i.e. spying on chrome users)


It's the latter.


So someone can put javascript in a page to compute equihash, autolykos, cuckoo cycle, etc? Is there a way to limit this?


They could already do that though?


But not using the client side gpu!


I believe WebGL "cryptojacking", as it's called, is indeed a thing. Not sure on prevalence though, or to what extent this introduction makes it more viable for malicious actors.

I'm not sure if lots of hashing algos are GPU-ready or optimized either.


Sadly all of this comes at the cost of having to learn a new WebGL and a new API, which isn't good. I would expect to have something similar to a Vulkan one. This is not good.



