React Renderer for Three.js (github.com/pmndrs)
213 points by oleksiitwork on Aug 15, 2021 | hide | past | favorite | 131 comments


When AR/VR finally happens, UI developers will have to deal with complexity from a completely different paradigm. For me, React's biggest strength has always been its ability to organize complexity into a manageable order. Combine that with the large pool of developers and the extensive ecosystem, and I think React will be the go-to tool for AR/VR apps. For this reason, I'm super hyped for R3F.


I really don't think React will be the go-to tool for VR; it's based on the DOM and trees of function calls, both of which are hierarchical, which necessarily means you have the gorilla-banana problem.

If you have a coffee cup on a table in VR, is that coffee cup a child of the table? How do you move the coffee cup off the table and put it onto another table? Is it now a child of that other table? What about the coffee in the cup? Is that a child of the cup? How do you change properties of the coffee without necessarily accessing the table and the cup?

Developers working on 3D systems have developed much better paradigms than the DOM for dealing with this problem. An Entity-Component-System architecture with "constraints" is the current best solution. In that architecture, you would create a coffee cup "entity" with a mesh "component" with another "constraint" component, constraining that coffee cup to the table (or better yet, mass component acted on by a physics system). Then you can simply remove the constraint component when removing the cup from one table, and re-add the constraint component when adding it to the other table.
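The approach described can be sketched in a few lines of plain JavaScript (a toy illustration, not any particular ECS library; all names here are made up):

```javascript
// Toy ECS sketch: entities are plain ids, components live in per-type Maps,
// and a "system" is a function that iterates one component store.
const attachedTo = new Map(); // constraint component: entity -> parent entity
const position = new Map();   // position component: entity -> {x, y, z}

const table1 = 1, table2 = 2, cup = 3;
position.set(table1, { x: 0, y: 0, z: 0 });
position.set(table2, { x: 5, y: 0, z: 0 });
position.set(cup, { x: 0, y: 1, z: 0 });

// Constraint system: each frame, snap constrained entities to their parent.
function runAttachSystem() {
  for (const [entity, parent] of attachedTo) {
    const p = position.get(parent);
    position.set(entity, { x: p.x, y: p.y + 1, z: p.z });
  }
}

// Moving the cup between tables is just swapping a component -- no reparenting:
attachedTo.set(cup, table1);
runAttachSystem();
console.log(position.get(cup).x); // 0 (on table1)

attachedTo.set(cup, table2);
runAttachSystem();
console.log(position.get(cup).x); // 5 (on table2)

attachedTo.delete(cup); // the cup is free again; no tree surgery involved
```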

Overall, I think web developers are in for some intense learning and paradigm shifts if 3D becomes the norm.


I don't see why an ECS would be incompatible with a DOM tree.

As for the gorilla-banana problem, I would think all objects in a scene would be under the root, with the exception of pieces that make up a thing and rarely separate (wheels on a car, for example).


While not incompatible with ECS, the DOM and this renderer go all-in on the javascript event-loop. You would have to write your own run loop, which executes the systems on every frame (ideally creating a DAG and executing in parallel while possible), and leave the event loop behind, with all the niceties like `onClick`, to go full ECS. Otherwise you'll create some Frankenstein monster of part ECS, part event-loop, part declarative React.
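A bare-bones version of such a run loop might look like this (illustrative only; `world` and the system names are invented, and a real engine would drive this from requestAnimationFrame rather than manual calls):

```javascript
// Minimal hand-rolled run loop: systems execute in a fixed order every frame
// instead of waiting for DOM events on the JS event loop.
const world = { dt: 1 / 60, entities: [{ x: 0, vx: 1 }, { x: 0, vx: 2 }] };

const systems = [
  function physics(w) { for (const e of w.entities) e.x += e.vx * w.dt; },
  function render(w) { /* culling, draw calls, etc. would go here */ },
];

function frame(w) {
  // A real engine might build a DAG of systems and run independent ones in parallel.
  for (const system of systems) system(w);
}

// In a browser this would be requestAnimationFrame(frame); here we step twice:
frame(world);
frame(world);
console.log(world.entities[0].x); // 2 frames * 1 unit/s * (1/60)s ≈ 0.0333
```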

Additionally, you can throw OOP into that mix as well, because Three.js has its own whole OOP-style framework that you're strapping declarative React on top of with this renderer. Reminds me of Jonathan Blow's talk on the end of civilization via endless layers of abstraction[1].

I really think, when it's ready, a Bevy[2]-style system either native or compiled to WASM with WebGPU will be ideal.

And while I'm airing opinions (forgive me), I think writing shaders now is like SQL 30 years ago. Developers left optimizing difficult--according to them--SQL to database administrators by abstracting it away into ORMs. If history is any indicator, I think we'll be having the same arguments on Hacker News 30 years from now about 3D frameworks vs writing shaders directly as we're having now about ORMs vs writing SQL directly.

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

[2] https://bevyengine.org/


It's not that it's incompatible, it's that when the ECS is the primary tool for organization, a DOM tree (or scenegraph) is merely one way of iterating over the entities - not the way.

This provides tons of benefits: for example, you can also decide to iterate over the entities by shader program and gain significant speedups in graphics processing, or maintain components that roughly sort them by their position in world space for physics, culling, lighting, etc.

For a crude analogy, imagine if Document.querySelectorAll() were a zero-cost abstraction, i.e. it ran as fast as iterating over linear memory. In practice this isn't how it turns out with an ECS, but it's much closer and you can get this kind of performance for the "hot path" kind of queries.
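As a rough illustration of that analogy, the ECS "hot path" query is just a linear scan over densely packed arrays (hypothetical names, a single axis for brevity):

```javascript
// A "query" over densely packed component data is a plain linear scan over
// contiguous memory -- no tree walking, no selector matching.
const count = 10000;
const positionsX = new Float32Array(count);          // one component, packed
const velocitiesX = new Float32Array(count).fill(1); // another component

// The hot-path query: every entity with position + velocity, in memory order.
function integrate(dt) {
  for (let i = 0; i < count; i++) positionsX[i] += velocitiesX[i] * dt;
}

integrate(1 / 60);
console.log(positionsX[0]); // ≈ 0.0167
```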

To add to the sibling comment, there's another wonderful Rust ECS called shipyard[0], and I helped write a scenegraph for it (which I really need to update one of these days)[1]

[0] https://github.com/leudz/shipyard

[1] https://github.com/dakom/shipyard-scenegraph


React is not based on the dom; R3f merely expresses regular threejs, which works as an object graph. Three is the usual choice for 3D on the web, and if you use it once you'll see that it is also quite natural. There is no conflict between the two, and react certainly doesn't change any rules or apis; it just builds a graph, which you would normally form imperatively.


React is inspired by the DOM (they split it out before 1.0, IIRC), but that misses the forest for the trees. The main issue I had is that React, Three.js, and R3F are all hierarchical/tree-like (what you and Three.js are calling a graph). Yes, you can technically build 3D scenes, but anything non-trivial will be very awkward.

Let's say you're building a game where you want a sphere to stick to whatever player you throw it at. How would you do that with a scene graph/OOP model? It'd be awkward: removing objects from one parent and adding them to another. Even more awkward if it's a complex object and you only want a part of it to stick to the player. ECS + a constraint or physics system does a decent job (not perfect) of handling this in a relatively elegant and performant way.
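The reparenting awkwardness can be shown with a toy, translation-only scene graph (a made-up structure, not the Three.js API, though Three.js's Object3D.attach() exists to do this same bookkeeping with real matrices):

```javascript
// Toy scene graph with translation-only transforms: to keep an object visually
// in place when reparenting, its local offset must be recomputed against the
// new parent's world transform.
function worldX(node) {
  return node.parent ? worldX(node.parent) + node.localX : node.localX;
}

function reparent(child, newParent) {
  const keepWorld = worldX(child);              // remember the world position...
  child.parent = newParent;
  child.localX = keepWorld - worldX(newParent); // ...and rebuild the local offset
}

const tableA = { parent: null, localX: 0 };
const tableB = { parent: null, localX: 5 };
const cup = { parent: tableA, localX: 1 };

console.log(worldX(cup)); // 1
reparent(cup, tableB);
console.log(worldX(cup)); // still 1; but cup.localX is now -4
```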

I've used Three.js enough--built my portfolio[1] out of it, and then switched to Babylon when I realized how little I liked Three.js. For the record, I also dislike Babylon.

[1] https://tuckerconnelly.com


i have yet to encounter something that shouldn't be expressed as a graph. three, babylon, ogl, blender, gltf, cad, games, they're all scene aligned. that doesn't seem to be a conflict since you still use shaders, physics, ecs and so on.

could you go more into detail what you mean when you say "anything non trivial"? is there a real example of something that would not be possible to create in, say, threejs?


https://codesandbox.io/embed/simple-physics-demo-with-debug-...

There are some excellent demos of how this works with this library and a full physics engine. Loads great even on my phone.


Nothing in R3F prevents you from organizing your scene like that


AFAIK Unity has scene tree(items on table belong to table for example), so it seems compatible. Usually 3D transforms work on scene subtrees.


Aren’t nested coordinate systems a quite natural match for a DOM-like tree structure?


As a counterpoint, I've enjoyed building VR on the web using C++ (with WebAssembly/WebGL/WebXR) and not having to touch the DOM or JS, see: https://twitter.com/nobbis/status/1425266634982248451

Benefits include complete control at frame and pixel level, being cross-platform (same code runs on web, iOS, macOS, Linux), and having access to third-party C/C++ libraries for 3D graphics.


I have not put a lot of time into learning WebAssembly. But isn't WebGL a JavaScript API? Meaning wouldn't you be going from WebAssembly -> JavaScript engine -> WebGL? I was under the impression WebAssembly had no access to the outside world and could only access the relevant JavaScript APIs. But if it is true you can basically do WebAssembly -> native GL then that would be amazing.


You're correct. WebGL does require extra validation compared to native GL, but it's effectively the same API as OpenGL ES 2.0/3.0 and Emscripten handles the translation from C/C++ for you.

There's some overhead but it's negligible (assuming you're not making overly redundant API calls.)


And that last part is key: for modern high-performance graphics acceleration, the name of the game is "maximum throughput with minimum API interactions."

If your data isn't structured for fast rendering, it doesn't matter much what language you're using; they'll all be too slow.


This is how Figma does it


emscripten ships a "desktop GL" emulation library [0], which can have quite a bit of overhead. If you want something faster, you can use the native WebGL bindings [1].

[0] https://github.com/emscripten-core/emscripten/blob/main/src/... [1] https://github.com/emscripten-core/emscripten/blob/main/test...


The "native WebGL bindings" still call into JS.

Desktop GL emulation is just a layer on top of OpenGL ES if, for example, you're still using OpenGL's fixed-function pipeline (deprecated 13 years ago).


I wrote an opinionated WebGL wrapper in Rust/WASM here: https://github.com/dakom/awsm-web

The idea is that it does more caching of things like uniform locations, so you do very fast in-memory lookups in WASM without hitting the JS API as much.

In the future this will be obsolete since WebGPU has a more optimal API to begin with, and Rust/WASM won't need to go through the JS layer due to "interface types"


WebAssembly has no access to the outside world at the moment, that is correct. It is only able to call (and be called by) JS.

A C++ application compiled via Emscripten ships a (fairly large) amount of JS glue code that exposes all relevant browser APIs like WebGL, Fetch, and other HTML5 stuff to the actual WASM program. As others commented, an additional API translation is applied for WebGL. If the source targets OpenGL ES 2 (or 3 for WebGL 2), however, this step has almost no overhead.


Not that large. The JS glue code for the VR web page referenced above is 48 KB, including WebGL, WebXR, Fetch, etc.


This looks really neat


React is one of the worst choices for doing something like that.

The underlying abstraction model of having a tree of components and re-rendering only the parts that have changed between renders doesn’t map to the hardware at all, meaning you’ll waste most of the HW performance just on maintaining the abstraction.

You’ll also get zero benefit from the third-party libraries - there’s nothing in them that can help you with the stuff that matters, like minimizing the number of GPU state transitions or the number of GPU/CPU syncs.

It will be scenegraphs all over again, and the graphics industry has ditched these long ago in favor of simpler models, for good reasons.

Long story short, the happy path in graphics programming is very narrow and fragile, and you typically want to structure your abstraction around it.


You are arguing against threejs, not react. R3f reconciles threejs in the exact way it's getting used: as a graph. This of course is also how blender, gltf, et al. work. If you make a webgl app on the web you most likely use three, and all react does is make that a little more ordered, with some additional benefits when it comes to performance, memory, and interop.


What is wrong with scenegraphs? And what is the graphics programming using instead?


Totally agree!

We switched https://flux.ai from vanilla ThreeJS to R3F and it’s been a huge productivity gain for the whole team!

Less code, that is more capable and more reusable!

In case anyone is interested, we are hiring: https://coda.io/@flux-ai/flux-jobs


Is this AR/VR? It looks like 2D circuit schematics.


It’s 2D schematics and 2D/3D PCB layout. All powered by r3f though

PCB layouting hasn’t publicly shipped yet though


Except that for three.js, it's React introducing complexity rather than improving the organisation of the code. A simple component with defaults looks neat, but start building a complex scene and JSX gets in the way.

three.js isn't dom elements updated in js. The state of each object is updated in the scene depending on more than whether they changed.

Where three.js lacks abstraction is a component system, in plain JS, to organise applications with decent patterns. Most three apps are a big blob of code.


Okay yes, thank you for saving me from my drivel. Why would three.js care about representing a document model, bubbling up events, and so on? If we do this, we do it fresh.


I think they need to take a very serious and hard look at performance before it can go anywhere near VR (where rendering speed and stability are paramount). I'm sure it works for simple things and can handle GUIs fine, but the overhead seems huge currently.


there are many large scale apps built with it these days. it was initially made for complex use cases, to bring order into the scene graph, and of course to optimize raw rendering performance: https://docs.pmnd.rs/react-three-fiber/advanced/scaling-perf...


BabylonJS and PlayCanvas do it just right without jumping into React fashion.


Is there an error in the examples? You have

    const mesh = useRef()

    ...

    <mesh ref={mesh} ...

You'll be rendering an undefined element (before the ref has a chance to attach).

Also, the TypeScript example makes my head hurt.

    const ref = useRef(null!)
Ah, yes. A non-null null literal.


The TypeScript example does feel a little weird. I've used this library successfully with React + TypeScript with the following pattern:

  const mesh = useRef<THREE.Mesh>(null); // mesh has type RefObject<THREE.Mesh | null>
  ...
  useFrame((): void => {
    if (mesh.current != null) {
      mesh.current.rotation.x += 0.01;
    }
  });
  ...
  return <mesh ref={mesh} ... />

As others have mentioned, the ref will typically attach by the time a `useEffect` hook is called, but for safety it's nice to have the null check.


I was scratching my head over this one too. Other answers in the thread don't seem right. I then realized this:

    const mesh = useRef(... 
assigns an instance of something (`React.MutableRefObject`) to a variable in scope, but

    <mesh ...
after compiling into normal JavaScript becomes

    (0, _jsxRuntime.jsxs)("mesh"...
(see `console.log(__SANDBOX_DATA__.data.transpiledModules['/src/App.js:'].source.compiledCode) `)

So that's why you're not seeing the editor or runtime complain about this. `<mesh` (lowercase), like other lowercase react elements, gets translated into `Something('mesh'`, and doesn't require a defined component variable.

Think this:

    const div = 'not a div -- i am a string'
    return <div>{div}</div>
Just one of the many gotchas of the many layers of JavaScript/TypeScript/React/etc. programming environments.


I’d answer the original question…

> Is there an error in the examples? You have

    const mesh = useRef()
    ...
    <mesh ref={mesh} ...
> You'll be rendering an undefined element (before the ref has a chance to attach).

…as follows:

1. On that last line, <mesh> is just a regular element[0].

2. Ref assignment makes this element’s instance accessible within component code[1] under a constant, here coincidentally (and confusingly, I must say) named “mesh”. The constant could’ve been named anything else, like “el”, and the example would work exactly the same way:

    const el = useRef()
    ...
    <mesh ref={el} ...
3. How come the constant in the original example does not shadow the equivalently named built-in JSX element? Well, there is a convention/hard-coded rule in React/JSX/TSX whereby lowercase identifiers are simply never resolved to custom components. If we changed “mesh” to “Mesh” in the original example, the app would attempt to use the ref constant as a (malformed) custom component and predictably throw.

[0] https://www.w3.org/TR/2016/CR-SVG2-20160915/shapes.html#Mesh...

[1] https://reactjs.org/docs/refs-and-the-dom.html#refs-and-func...
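The lowercase/uppercase rule can be simulated with a stand-in element factory (the `jsx` function below is a mock for illustration, not React's actual implementation):

```javascript
// Tiny simulation of the JSX compilation rule: lowercase tags compile to
// string literals, capitalized tags compile to identifier references.
function jsx(type, props) {
  return { type, props };
}

const mesh = { current: undefined }; // the ref constant from the example

// <mesh ref={mesh} /> compiles to jsx("mesh", { ref: mesh }) -- the string
// "mesh" is resolved by the renderer; the local `mesh` constant is untouched.
const lower = jsx("mesh", { ref: mesh });
console.log(lower.type); // "mesh" (a string, not the ref object)

// <Mesh ref={mesh} /> would instead compile to jsx(Mesh, { ref: mesh }) and
// look up the identifier Mesh in scope -- that's where shadowing would bite.
```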


Excellent explanation, thank you!


this is how refs work. it's the same with a div. the ref will be filled in useEffect, not before.

null! is a common typescript thing, it is a semantic guarantee that the ref is static and hence will be available. this saves you the if (ref.current) { ... } check.


To clarify, I'm not talking about the ref _prop_, I'm talking about them using the ref as the element.


It’s lowercase, so it will act as a plain-old tag. If the name were uppercase, that would be the bug you described.


hm, interesting. I didn't know that -- I knew about custom tags / variable tags but never knew it _required_ them to be PascalCased. I feel like if I saw code like this come from one of my team members I'd ask them to change the name of the ref.


no ref is used as an element in that code.

    const ref = useRef()
    useEffect(() => console.log(ref), [])
    return <div ref={ref} />
will return { current: [dom node] }

    const ref = useRef()
    useEffect(() => console.log(ref), [])
    return <mesh ref={ref} />
will return { current: [mesh node] }


The ref isn't used in an effect, though, at least not in the provided code.


The useRef hook returns a stable, mutable object; the value lives in ref.current. Passing that object to the ref prop tells React to set .current to the element on mount.


I believe the `null!` is some trickery to let the rest of the code know that `ref.current` will never be null, but with how React works you can’t really provide any other value. It prevents having to check for `null` every time you address the value.

I don’t understand your first issue. The component is assigned to the ref, not the other way around, so why would it render `undefined`? It’s possible that `mesh.current` is undefined at first render, but that shouldn’t matter for the component, only for the reference.


Oops, I guess it's not undefined, but it'll attempt to render a ref object.

const mesh = useRef()

<mesh />

The underlying JavaScript (i.e. not JSX) on the first render is React.createElement(mesh /* { current: undefined } */, { ref: mesh }).

In fact, I don't even think it matters which render you're talking about. It's still rendering a ref (again, { current: <undefined or a THREE.Mesh, I guess> }). That doesn't seem right to me.


Actually, the compiled JS is `React.createElement('mesh', { ref: mesh })`. The JSX compiler converts all lower-case names to strings; only upper-case names (and composites like `rn.View`) are converted to variable references.

The reason r3f uses lower-case names for built-in components is to distinguish them from your own components; from the documentation: "It merely expresses Three.js in JSX: <mesh /> becomes new THREE.Mesh(), and that happens dynamically." So whenever you see <mesh /> or anything else lower-case, it is just a JSX alias for a Three.js core component. Note that they're not HTML tags in the resulting output; since they're rendered inside <Canvas />, they're transformed to Three.js automatically.


Ah, right. It's very obvious in hindsight. Thank you for clarifying.


Ah, right. But the mesh useRef is not what is being rendered; the `mesh` component is more like rendering a div. If you open the sandbox and remove the useRef mesh and its references, it will still work.

Pretty confusing though; it would have been better to call it `meshRef`. I also expected these components to be capitalised, but it probably makes sense, given that in the dom renderer the default components are lowercase as well.


yeah imo simply changing the variable name of the ref would go a long way, here.


I'm guessing they are overloading the ref's usage with the custom renderer. I believe that a renderer can accept non-function/class components in theory, which is why this works.

I don't really like this though; although interesting, it feels like a hack and might not integrate well with tooling (e.g. the TypeScript example).

How I would have implemented it is exposing a Mesh component which wraps this, and a useMesh hook to wrap and type the ref.


r3f is a custom renderer, there is no difference in useRef between a div or a mesh, refs give you the underlying object. lowercase elements are native elements (div, span, mesh, view, box), they are defined by the renderer. uppercase is for components.


Ah, if they're using a custom JSX Pragma it might make sense. Still a little too bespoke for me.


no special jsx pragma needed. this is actually just plain react, <div> <span> and so on come from react-dom, which defines these elements. other renderers define theirs. here's a mini custom renderer if this interests you: https://codesandbox.io/s/reurope-reconciler-hd16y


Yeah, I don’t have experience with this lib, but the example code looks overly magic. But I’m also more of a backend dev, and find the useRef API to be inherently confusing, so that’s probably part of it.


I have to wonder: Have we gone too far? Is TypeScript becoming more tedious than helpful? Has React lost its way in its blind zeal for pure functions?

...no, no, definitely not. Must've gone crazy for a second there.


I think I blame JavaScript for this mess, not the other way around.


So react builds a tree of react elements that react-dom turns into HTML. So this must replace react-dom?

Reusable components are easy in a 3D engine like Three.js. You can still program declaratively if you like. Its claim to outperform raw Three.js is surely untrue. React is also bad for animations, and they recommend you sort of use React's “back door” to do complex animations; 3D engines are all about animations. You could use Redux easily without React if you liked. I bet you still have to learn a new API.

This doesn’t seem that useful. Am I mistaken? (Honestly curious)

That said, I bet it was interesting and pleasantly challenging to write


The creator of React-Three-Fiber has come out with some extremely bizarre tweets, like claiming updating 2,000 cubes at 60fps is an "impossible amount of load" [0].

The solution it uses for performance is... punting your calculations to the next frame if you didn't make it this frame. They call this "the scheduler". Turns out this makes no sense in any serious context if you want your 3D frame to be coherent.

I'm sure this is helpful for some people, but I can't rely on it when making serious applications. There's lots of things you can punt to the next frame, but ideally that should be decided per system, and not a global framework.

Keep in mind that rendering here is still happening every frame, so the only thing left is 2,000 transform updates for the cubes. Inexplicably, without the scheduler that punts updates to the next frame, it takes 700ms to do this [1]. For 2,000 matrix muls, that's 0.35ms per cube. What on earth is it doing?

[0] https://twitter.com/0xca0a/status/1199997552466288641 [1] https://twitter.com/0xca0a/status/1199997561358213120


The "No. There is no additional overhead. Components participate in a unified renderloop outside of React. It outperforms Threejs in scale due to Reacts scheduling abilities." made me do a serious double-take.

If you want to go fast in this space then you need to care about data layout and how the system is structured end-to-end. Calling a function per object is going to hit a wall regardless of how you schedule.

Hundreds/Thousands of updates is not small but it's also not massively impressive either. I've done ~2,800 node scene graphs on underpowered ARM chips back in '09 at 60FPS including rendering. You have to use NEON, and be aware of your caches. No scheduling magic is going to change that unless you're just deferring work which sounds like what may be happening there.

FWIW I've also done this in Java via FlatBuffers(which uses ByteBuffer internally) to keep data coherency when driving animations frames so it doesn't require dropping down to C/C++/Rust(although C#'s value types do make it easier).


I can't take a 3d rendering framework seriously if it gets bogged down at updating 2000 untextured and non-interacting cubes @ 60 FPS.

What on earth is all that time spent on? Doing ~2000 3x3 matrix ops should take a modern processor a few hundred usec surely?
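For a rough sanity check of that estimate, here's 2,000 naive 4x4 matrix multiplies (the size Three.js uses for transforms) in plain JavaScript; exact timings vary by machine, but it should finish in a few milliseconds at most:

```javascript
// Naive 4x4 matrix multiply over flat row-major arrays -- no SIMD, no tricks.
function mul4(a, b, out) {
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) {
      let s = 0;
      for (let k = 0; k < 4; k++) s += a[r * 4 + k] * b[k * 4 + c];
      out[r * 4 + c] = s;
    }
  }
}

const a = new Float64Array(16).fill(1);
const b = new Float64Array(16).fill(2);
const out = new Float64Array(16);

const t0 = Date.now();
for (let i = 0; i < 2000; i++) mul4(a, b, out);
const elapsed = Date.now() - t0;
console.log(out[0], elapsed); // 8 (= 4 * 1 * 2), done in a few ms at most
```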

To put the silliness of the 2000 number in context, look here at a showcase of the Unity ECS system from years (!) ago: https://software.intel.com/content/www/us/en/develop/article...

Their starting setup achieves 16,000 textured and (relatively) complex models @ 30fps, which is already much more than the "optimized" thing is achieving here. And it doesn't "cheat" by pretending to be fast via simply not doing the updates that are expected. And once they apply the various optimizations with memory layout etc., they get to 150,000 (!) textured models moving about on screen @ 30fps. So let's say 75,000 @ 60fps, which is more than 35x as many objects, and the objects are much more complex.

Am I missing something? Why is "2000 cubes @ 60fps" extraordinary?


You have to remember that this 700ms overhead is not the renderer. It's just setting fields on the underlying Three.JS Cube objects. The Three.JS renderer runs fine, and I don't even consider Three.JS to be a fast renderer. So yes, the overhead of React-Three-Fiber's tree reconciliation is seemingly massive. No, I do not have any answers for what it is doing. Nor do I have any guesses.


Thanks for clarifying. 700ms overhead is massive. It sounds like the approach is just deeply suboptimal for any sort of high performance and complicated 3d scenes. Perhaps that was not their objective...


there were multiple versions of that test; the first had nothing to do with the subject matter, and the ones that people refer to (spinning cubes) had an artificial delay that was added to simulate cpu stress.


No artificial delay was added from what I can find. Here's the example from the tweet [0]. Feel free to tell me where the artificial delay is.

[0] https://github.com/pmndrs/react-three-fiber/blob/e3a71baad42...


we're running in circles unfortunately and i've explained where the test you're referring to comes from and what it meant. i've posted the real test and if you want, engage in it.

    async function test() {
      const chars = `!"§$%&/()=?*#<>-_.:,;+0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz`
      const font = await new Promise((res) => new THREE.FontLoader().load("https://raw.githubusercontent.com/drcmda/scheduler-test/master/public/Inter%20UI_Bold.json", res))
      console.time("test")
      for (let i = 0; i < 510; i++) {
        new THREE.TextGeometry(chars[Math.floor(Math.random() * chars.length)], {
          font,
          size: 1,
          height: 0.5,
          curveSegments: 80,
          bevelEnabled: false,
        })
      }
      console.timeEnd("test")
    }

    test()

    // To really drive it home you'd have to repeat it every two seconds ...
    // setInterval(test, 2000)
as for how react 18 concurrency works exactly, i don't think this is the right place to churn through it. the react team has published tons of reading material as well as public talks.


It's interesting because in normal React usage, the overhead of generating and throwing away a bunch of objects on each update is dwarfed by the cost of updates to the DOM

But in Three.JS the actual updates to the tree should be cheap, right? There's no reflow, you're just setting values in memory to be used by the next render frame. If so, that changes the calculus.

There's also the fact that in an app, it's rare for actual state updates (and therefore React renders) to happen on every frame; usually it's only on interactions. Maybe the occasional animation (if it can't be handled by native CSS animations). Whereas in graphical contexts like this, it's much more likely you'll have lots of objects in continuous motion (and therefore continuous re-renders).

I can see the productivity gains being worth it for a lot of simpler use-cases, but I'm skeptical about the performance claims when you start to get into complex scenes with lots of entities.


Yeah, I can totally see the argument that it's a well-understood programming model and you can get a productivity boost from that. However, I don't think you can say that it doesn't come with a cost; otherwise it would have been pretty widely adopted across the industry.


> otherwise it would have been pretty widely adopted across the industry

I'm not sure that's a fair explanation for why. It's totally possible to come up with new paradigms that are useful even though nobody's thought of them before.

I would think the main issue will be around JavaScript's tendency (cultural, syntactic, etc) to casually create and release objects all over the place, constantly. There's nothing intrinsically wrong with this, but it seems problematic for this use-case.

Example: JavaScript doesn't have named function parameters; instead, you just create and destructure an object:

  function foo({ param1, param2, param3 }) {
  }

  foo({ param1: 'a', param2: 'b', param3: 'c' })
The syntax encourages this, the React docs encourage this, and JSX itself does this for every element you render. For normal JavaScript use cases it works just fine, but when you're running this logic every frame, I would guess it will limit you at a certain point.
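The pattern in question can be shown directly (a toy sketch; `setColor` and `scratch` are made-up names, not from any library):

```javascript
// The destructuring idiom allocates a fresh options object per call:
function setColor({ r, g, b }) {
  return (r << 16) | (g << 8) | b;
}

// In a per-frame loop, that's one short-lived object per iteration:
let hot = 0;
for (let frame = 0; frame < 3; frame++) {
  hot = setColor({ r: 255, g: 128, b: 0 }); // new object each frame
}

// A hot-path-friendly variant reuses one preallocated argument object:
const scratch = { r: 255, g: 128, b: 0 };
let reused = 0;
for (let frame = 0; frame < 3; frame++) {
  reused = setColor(scratch); // no per-frame allocation (barring JIT elision)
}
console.log(hot === reused); // true -- same result, different GC pressure
```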

Despite that, I think people are onto something with the broader idea of coding a 3D scene declaratively. I'm just skeptical that React or its norms are the right path to doing it at scale.


Practically, if that function is somewhat hot, I wouldn't be surprised if V8 omitted the allocation altogether when generating optimized bytecode — internally, properties on objects already have an "order," so V8 could push each property onto the stack in that order/reverse order (depending on the calling convention). And if it doesn't yet, that's not a difficult optimization to make.


That requires substantial escape analysis, which I've noticed V8 does not handle too well. There are similar issues with the for...of iteration protocol, which makes a new object on every iteration; from my experiments it is about 10-15% slower than a C-style for loop and generates actual GC garbage.


It seems like it would be easy to optimize, but it's one of those things where I know the V8 devs are much smarter than I am so I assume if they haven't figured out how to optimize it, it must be harder than it seems :P


Hierarchical component-based designs have been around in game dev for ages. I first used them in '05, but I remember prior art even before then. It was pretty common to have a declarative way to define components (usually through a scripting language like Lua, or sometimes a custom DSL).

I agree on the performance aspect: any inner-loop stuff was always done in a native language or a heavily JIT'd path, but even then data layout drove it even more, which usually required structuring the upstream systems ahead of the core logic. It's the reason why there's no "one-size-fits-all" game engine. They all make very discrete trade-offs in terms of entity counts, open-world vs. constrained layout, and the like.


The claim that a general purpose web frontend framework adds no overhead to GPU intensive animation is indeed bizarre. The author seems to believe his own marketing that his framework is the end-all-be-all.


The “no additional overhead” bit struck me as a really bold claim, and this confirms it. 2,000 cubes at 60fps is really nothing if you know what you’re doing.


r3f does not introduce overhead. the readme text explains it all https://docs.pmnd.rs/react-three-fiber/advanced/scaling-perf...


I’ve drawn 100s of 1000s of objects per frame in the past. 2000 is nothing.


> updating 2,000 cubes at 60fps

Have we gotten so bloated with framework on top of framework that even this is deemed an impossible load?

If you write straight-up WebGL code 2000 cubes at 60fps should be a walk in the park for any modern PC.


threejs has no problem rendering hundreds of thousands of objects, so react doesn't have a problem either because its baseline is threejs, it does not introduce overhead or a performance penalty. These components render in a regular RAF loop, react is not involved at all unless you are mounting/unmounting objects, and this is where it can actually be faster.


Yea. This could be nice for small demos and toy apps.

But if you want to drive a canvas-based render context, you're going to want to control the render loop for your own application.

Performance there is domain specific.


as absurd as it may seem to you, this is a react feature. the upcoming react 18 release is pretty much based on concurrency.

this is the test you are referring to: https://docs.pmnd.rs/react-three-fiber/advanced/scaling-perf...

the github repo now contains a vanilla test that you can run


As I said, "schedule" just means "defer work to future frames". That's a valid strategy if you have severe load, but it's not globally applicable: one might imagine that two components need to be updated together (e.g. updating the arms and body of a character should not be split up between two frames, or else the user will see split bodies for in-between frames). At the very best, this sort of scheduling should be informed by gameplay systems.

It can "effortlessly outperform" only if we're talking about throughput, not latency here -- the scheduler ensures that less is done each frame so we can meet 60fps each frame, at the expense of having things done frames later than when they probably should have been.
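
The tradeoff described above can be illustrated with a toy frame-budgeted scheduler (entirely hypothetical; React's actual scheduler is far more sophisticated): work that doesn't fit in this frame's budget slips to a later frame, which caps per-frame cost at the price of latency.

```javascript
// Toy frame-budgeted scheduler: run queued tasks until the budget is
// spent, defer the rest to a later frame. Hypothetical sketch only.
function makeScheduler(budgetMs) {
  const queue = [];
  return {
    enqueue(task, costMs) { queue.push({ task, costMs }); },
    // Runs what fits in the budget; returns the number of tasks deferred.
    runFrame() {
      let spent = 0;
      while (queue.length && spent + queue[0].costMs <= budgetMs) {
        const { task, costMs } = queue.shift();
        task();
        spent += costMs;
      }
      return queue.length;
    },
  };
}

const sched = makeScheduler(5);            // pretend ~5ms of work fits per frame
let updated = 0;
for (let i = 0; i < 10; i++) sched.enqueue(() => updated++, 2); // ~20ms queued
const deferred = sched.runFrame();         // only 2 tasks fit; the rest slip
```

If two of the deferred tasks were the arms and body from the example above, they would land on different frames, which is exactly the objection being raised.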

I'm aware this is all a React feature. I disagree with the React team that "concurrency" is a usable solution to performance in all cases. But I can respectfully disagree with them about that. I can understand why a scheduler helps improve user-perceived performance. I'm happy to talk about what I believe are the tradeoffs.

Regardless of all of that, updating 2,000 cubes should never cause 700ms of load to begin with. The test you linked me to is not the same test. The test in the tweet is seen here [0], and has no artificial runtime delay as far as I can tell. For extra irony, note that they already have to bypass React (which is described as ItemSlow), in favor of the "Zustand approach", aka modifying things imperatively.

Remember, Three.JS is already running every frame, and rendering every frame, with or without a scheduler. That means that the 700ms overhead has to be coming from the React / R3F part of the demo.

[0] https://github.com/pmndrs/react-three-fiber/blob/e3a71baad42...


what you link there has more to do with react vs zustand.

if it interests you, read up on react 18 concurrency, this is the bit you are missing in this discussion.


No; that's not the test in question. The tweet I started the conversation with [0] shows a bunch of cubes, not text geometry. It's not the text geometry test. It's just creating and manipulating a bunch of cubes. Unfortunately, it was removed from the react-three-fiber examples, so it can't be easily run unless you check out an old version of the repo. It contains two switches, one for React vs. Zustand, and the other for Concurrency On vs. Off [1]. The tweet is talking about the concurrency mode.

I'm very familiar with React 18 concurrency, and gave my detailed analysis of it. The documentation you linked even confirms my analysis of it deferring work across frames:

> it can potentially defer load and heavy tasks

I've already given my feedback about that approach. I heavily suspect the overhead here is all React's reconciler / differ, as Svelte, which has no scheduler, performs similarly to the concurrent React mode[2].

[0] https://twitter.com/0xca0a/status/1199997552466288641 [1] See the description in the panel here. It's talking about React 18 concurrency. https://github.com/pmndrs/react-three-fiber/blob/e3a71baad42... [2] https://twitter.com/Rich_Harris/status/1200805237948325888


i've written them. the first pits react against zustand, it naively lets react churn through the whole graph 60 times per sec, you wouldn't do that ever, but zustand could. i initially tweeted it for people interested in that lib.

some took it the wrong way, so i changed the test to actually pit it against a vanilla counterpart, and they went silent. that's the real test, contest that please, and let the zustand thing go.


Ah, so you're @0xca0a. Sorry, I didn't recognize you at first. The examples and documentation I'm quoting at you are your own; my mistake.

That said, I don't know why you entered into the conversation by asserting properties about a different test than the one that started the conversation. I recognize that TextGeometry is slow and CPU-heavy, and that the deferred work approach has some value in helping to manage that, but the cube test we started the discussion with had no artificial delay as far as I can tell, nor do I see it really doing any CPU-heavy work. For cases without any obvious CPU-heavy work, where does the 700ms overhead come from?

> and it stupidly and naively lets react churn through the whole graph 60fps, something you would under no circumstance do

Iterating over 2,000 objects is something we do all the time in game engines (where I'm from and where I work). I've written particle systems that handle far more than that; I just checked the particle system for a game I'm working on, and the simulation time maxes out at <1ms, and I haven't even SIMD'd it, though it is somewhat SoA'd. React's inability to handle a low workload like 2,000 items is why I believe React is a poor fit for serious 3D applications.
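
The SoA layout mentioned can be sketched roughly like this (a simplified illustration, not the commenter's actual engine code): positions and velocities live in flat typed arrays, so the per-frame update is a tight, cache-friendly loop.

```javascript
// Structure-of-Arrays particle update: flat typed arrays instead of an
// array of objects. Simplified sketch of why 2,000 updates are cheap.
const N = 2000;
const posX = new Float32Array(N);
const posY = new Float32Array(N);
const velX = new Float32Array(N).fill(1.0);
const velY = new Float32Array(N).fill(0.5);

function simulate(dt) {
  for (let i = 0; i < N; i++) {
    posX[i] += velX[i] * dt;
    posY[i] += velY[i] * dt;
  }
}

simulate(1 / 60); // one frame's worth of simulation
```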

And yes, I would argue against React 18 Concurrency's approach as a general design. Deferring work between frames is an incredibly valuable tool, but I don't believe it's something that validly can be done globally, or that it's a solid basis for a 3D framework. It trades low latency for high throughput, and I believe that's a tradeoff that should be made by the application author.

Multiplayer games and VR often require very careful controlled latency, and having unpredictable latency can make your game unplayable, or make someone incredibly motion sick.

We're clearly going in circles on this point; I'll shut up now because we're at an impasse, and you're (completely validly) unwilling to discuss the cubes example that started things, which I would prefer to cross-examine in more detail. My approach to performance analysis is to profile things, understand bottlenecks, and come up with targeted fixes before reaching for more global approaches with large tradeoffs, so I'd personally be more interested in knowing where the time is spent.

Finally, and this is my experience speaking as a game engine engineer, I think you should think more about how much work you expect to be able to do in one frame, and set your sights there. You can easily update 2,000 objects with minimal overhead.


i am sorry but there is a deep misconception here. of course you can update thousands of things, why wouldn't you.

try racing game for instance: https://twitter.com/0xca0a/status/1400164834243719173

or space game: https://twitter.com/0xca0a/status/1184586883520761856

you won't find react in the browser's perf readout except for short blips when the tree structure is changed. you do not update fast things with setState. since games are render-loop driven, you mutate, and that is also how r3f works.

as for react 18, yes, it works globally. the entire component tree is virtual and can be prioritized. you could say: this thing back there is less important than the physics driving my rocket, defer it please. this will be an incredible tool for games going forward.


I've built "vanilla" Three.js projects, and worked with react-three-fiber. It is incredibly useful.

Having a new set of "primitives" that map to three.js objects and being able to render them in a React application makes so many things easier. It's also just way less code to write compared to standard three.js.

RE: animation performance, I haven't had any issues with it. I presume either they're doing a bunch of optimizations, or the concerns are greatly exaggerated. You can check out an app I built using it if you want some proof: https://beatmapper.app/


It’s just that what you’re rendering comes nowhere near pushing the limits of what can be done with the GPU.


It’s not mutually exclusive though

R3F gets you up to speed really quickly and helps you easily onboard a larger engineering team and stay productive.

At some point your scenes will become complex enough that you will want to start using instanced meshes or just move all critical things into shaders altogether… and you totally can do that.

In our case r3f made these performance optimizations even easier because it’s so modular.


For some use cases, like charts and diagrams, even piss-poor performance is enough as long as it is accessible to developers.


if you have a threejs app that pushes the limits, the react counterpart will merely do the same, only with less code. threejs renders exclusively.


react has little to do with html. it just calls functions, what you see in react-dom isn't html either, these are nested document.createElement(...) calls. this can be configured for any platform, web, native or otherwise, so <mesh/> in this case is just new THREE.Mesh(). in react terms it's called a custom renderer.

the other thing you say, that react is bad for animations — r3f operates outside of react, there is no overhead, it actually quite easily outperforms threejs. but there are many other benefits, it will usually save you lots of code, and it can be more memory efficient, see this thread: https://twitter.com/0xca0a/status/1426924274527477764

the biggest advantage is the component model, because it allows for a true ecosystem, something that threejs does not otherwise have due to the lack of a common ground. and interop: every react library can now act on meshes and materials.


So again, I have to contest this. Not all UIs should be described in a tree structure. React and its whole diffing algorithm exist to make sense of which parts of the tree changed.

That idea didn’t come out of the blue, it came from html, it came from the DOM.

If I give you a blank sheet of paper and say ‘write me a rendering api’, and you immediately reach for a tree structure, I’d be compelled to say you are influenced by html.

I feel like the watershed moment for us will be that moment in the Matrix where the kid tells Neo ‘there is no tree, there is no spoon’.

Edit (full quote from The Matrix):

Do not try and bend the spoon, that's impossible. Instead, only try to realize the truth... There is no spoon... Then you'll see that it is not the spoon that bends, it is only yourself.

We have been bending over for the DOM for quite some time now, I think it’s time we explore.

————————

Edit 2 (being rate limited), reply to some posts below:

Interestingly enough, every major performance trick on the frontend requires breaking out of the tree structure. If you want a data table with a notable number of rows, you have to break out and start defining heights and positions and calculating what to show manually. In other words, the free stuff we were supposed to get from the tree was not free.

My irreverence for the DOM comes from the fact that we have to negate it, ignore it, to achieve certain things.

So in the end I wonder, why bother with it at all?


Not all UIs should be described in a tree structure, but a tree is an isomorphism to a scene graph, and a scene graph is a very solid, traditional way to structure a 3D scene for rendering.


hard to read through all that tbh. i mostly do not understand what you are talking about when threejs is clearly a tree that react expresses with complete ease. groups in groups with meshes, which have materials, etc. it seems to me you have not worked with threejs before. r3f just expresses threejs, in the exact same shape you'd have in an imperative app.

there are dozens, hundreds of demos now that are testament to this approach: https://docs.pmnd.rs/react-three-fiber/getting-started/examp... this is slowly becoming the norm of how you write a 3d app or game.


While I strongly suspect you’re right about React being influenced by the problem space, I’d contest the notion that trees aren’t useful for computational constructs in general. Trees are fundamental to the concept of a function call graph over time, for example.


In most production renderers, you have a soup of objects. Sometimes it's a tree, but much more often it's a graph of nodes pointing to each other.

One of the first steps of this is to gather all the nodes in the graph, flattening it into a list, and then start filtering and sorting it: for every opaque object, you likely toss it in Z-pre and opaque buckets (for objects visible through the main frustum), and a shadow caster bucket (per shadowed light frustum!). So the renderer itself really prefers lists of lists, it does not have trees internally.
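
The flatten-and-bucket step being described might look roughly like this (hypothetical structure; a real renderer also does frustum culling and depth/material sorting):

```javascript
// Flatten a scene graph into a list, then sort it into render-pass
// buckets. Hypothetical sketch of the "lists of lists" idea, not
// three.js code.
function flatten(node, out = []) {
  out.push(node);
  for (const child of node.children || []) flatten(child, out);
  return out;
}

function bucket(root) {
  const buckets = { zPre: [], opaque: [], transparent: [], shadowCasters: [] };
  for (const node of flatten(root)) {
    if (!node.mesh) continue; // groups etc. carry no draw call
    if (node.transparent) buckets.transparent.push(node);
    else { buckets.zPre.push(node); buckets.opaque.push(node); }
    if (node.castShadow) buckets.shadowCasters.push(node);
  }
  return buckets;
}

const scene = {
  children: [
    { mesh: true, castShadow: true, children: [] },
    { children: [{ mesh: true, transparent: true, children: [] }] },
  ],
};
const b = bucket(scene);
```

Note that the tree only exists as input: after flattening, the renderer works with flat, filtered lists.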

React-Three-Fiber seems to take the DOM-equivalent tree, sort it, and then construct a Three.JS scene graph from it, and then Three.JS's render method starts from that scene graph soup and does the above. So one might argue that the tree is a DOM-like interface you're adding on top of Three.JS's scene graph.


it does not take a dom-equivalent tree, it just expresses the same code you would write out imperatively in a declarative way. threejs is a nested graph after all.


What is a “DOM-equivalent” tree? Do you just mean a tree?


I’m not sure it’s optimal from a performance perspective, but having everything represented by a tree is kind of nice.

I haven’t really encountered anything that can’t be represented well in a tree, especially with context providers in the mix.


The interesting thing about switching renderers is that you are now free of the DOM. I just think web developers only know one API really well (the DOM and its offspring frameworks), so when you give us a blank canvas (no pun intended), we resort to the same data structure and API we’ve always known.

We’re truly free to make a <Modal /> however we like. Perhaps not even in those tags. We just don’t know it yet.


JSX has very little to do with the DOM. It’s always been a language extension to provide render-agnostic expressions. Non-DOM renderers have existed for years (React Native, smart TV, CLI, PDF, PowerPoint, the list keeps going…).

The reality is it’s a good abstraction because it’s so decoupled from the DOM, but it’s associated with the DOM and web mainly for historical reasons.

It’s also worth noting that there are similar extensions to other languages that are similarly render-agnostic.


So I have to contest that a bit. It parallels the DOM in structure entirely. You can see that this is the truth when you want to have components outside of a parent-child hierarchy belong to the same ‘component’: you have to use this very abstract concept of a Portal. Nothing is natural once you start thinking outside the tree structure.

Certainly you can map this api to a variety of other APis, which I’m sure is what they did with React Native, but it was built with the DOM tree structure in mind.

I think it’s one of the most brilliant things frontend has ever created, along with jQuery, but they are both slaves to the DOM.


> So I have a to contest that a bit. It parallels the DOM in structure entirely. You can see how this is the truth when you want to have components outside of a parent-child hierarchy belong to the same ‘component’, you have to use this very abstract concept of a Portal, as in, nothing is natural once you start thinking out of the tree structure.

This isn’t a property of JSX, it’s a property of a virtual DOM. The former is typically used with the latter, but doesn’t have to be. SolidJS is an example of JSX without a VDOM. It compiles to plain DOM operations; you only have a component tree during development. And like React, its compiler was designed to be render-agnostic and can be/is used in other environments.

It’s funny you should mention Portal here. I am working on a technique/hopefully eventual library to transparently (without requiring developer intervention) use Portals as a partial hydration solution—completely sidestepping the tree structure and only rendering interactive components. I believe it will be framework agnostic for anything that provides (or can provide) a hyperscript/createElement function and Portal-like functionality. And like those frameworks, it’s just a declarative data structure and can render to anything.

> Certainly you can map this api to a variety of other APis, which I’m sure is what they did with React Native, but it was built with the DOM tree structure in mind.

I’ve spent a lot of time (maybe too much time) looking at the underlying renderer abstractions of React, Preact, Solid, JSX Lite, several others.

The only one tightly coupled to DOM is Preact (and even that isn’t totally coupled). In fact that coupling is one of the major advantages Preact has in terms of package size.

React’s design has a separate renderer interface/abstraction that is primarily oriented around state reconciliation; the DOM implementation is just one of many.

Solid’s JSX (implemented in a Babel transform somewhat misleadingly called dom-expressions) is similarly decoupled from its DOM implementation with an abstract renderer interface. The difference is rather than state reconciliation it’s reactive.

JSX Lite’s render target is even more abstract, it renders an intermediate data structure that can be transformed to other component libraries, and even some design apps.

You can implement a JSX transform to basically any render target, without React. You can even skip the entire notion of state, events, interaction.

I even use it on my personal site to generate PNGs at build time! Unless you’re looking at my source code you’d never know it.


Correction: Solid’s dom-expressions compiler is more directly coupled to DOM APIs than I remembered. In hindsight this makes sense given the design, but I imagine it could provide the level of abstraction I misremembered. It would probably harm performance without some second-pass build time inlining.


Lots of things are already tree-structured though, including Three.js


We're using these at my job, and it's been a pretty amazing experience. We built a wrapper[1] so that every front-end dev can build components even more easily (without needing to know Three.js), since at the end of the day they are just React components that can be easily composed together:

    <View3D>
      <Box position={[1, 0, 0]} color="green" />
      <Box position={[-1, 0, 0]} color="red" />
    </View3D>
[1] https://www.npmjs.com/package/@standard/view


For what it's worth, I've been very impressed and happy with what I've been able to achieve with R3F [1].

It's an online IDE for a number of CodeCAD packages, to help lower the barrier to this paradigm.[2]

Coming from knowing nothing about 3D, just hacking up example code, it's been easy to put something respectable together without much dedicated learning. I'm super grateful to the pmndrs team.

[1] https://cadhub.xyz/dev-ide/cadquery [2] https://news.ycombinator.com/item?id=27649270


This is very cool. Sounds like there's room for performance improvements, but those can happen now that the framework exists.


How are they making custom tags with lowercase names? I assumed React would render those as native tags.


this is a custom renderer. "react" does not know what a "div" is, this comes from "react-dom", that is why they have split these two packages apart.

but they also allow you to make your own renderer (https://github.com/facebook/react/tree/main/packages/react-r...) which then defines its own elements. you can try a mini threejs renderer here: https://codesandbox.io/s/reurope-reconciler-hd16y
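
A stripped-down sketch of what such a host config boils down to (Mesh/Group here are stand-ins for THREE.* classes, and the real react-reconciler interface requires many more methods):

```javascript
// Minimal sketch of a custom-renderer host config: lowercase JSX tag
// names are looked up in a catalogue and turned into object instances.
// Mesh/Group are stand-ins for THREE.Mesh / THREE.Group.
class Mesh { constructor() { this.children = []; } }
class Group { constructor() { this.children = []; } }
const catalogue = { mesh: Mesh, group: Group };

const hostConfig = {
  createInstance(type, props) {
    const instance = new catalogue[type]();
    Object.assign(instance, props); // real renderers diff/apply props carefully
    return instance;
  },
  appendChild(parent, child) {
    parent.children.push(child); // three.js would use parent.add(child)
  },
};

// What <group><mesh visible /></group> would boil down to:
const group = hostConfig.createInstance('group', {});
const mesh = hostConfig.createInstance('mesh', { visible: true });
hostConfig.appendChild(group, mesh);
```

The reconciler calls these hooks during mount/unmount, which is why lowercase names never reach react-dom's HTML handling.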


VRML anyone?


you can't express a 3d scene with markup alone, that is why VRML didn't succeed. the JSX you see just masks function calls, it is not XML or HTML, but the true power is in components and hooks (useFrame, etc). a r3f component is self-contained, it will even subscribe to the render-loop. click into these two examples to see the difference: https://twitter.com/0xca0a/status/1426924274527477764
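
For instance, a sketch with a hypothetical h() function of the kind JSX compiles down to (not React's own createElement): the markup-looking syntax is just nested function calls producing plain data.

```javascript
// JSX like <mesh scale={2}><boxGeometry /></mesh> compiles to nested
// calls, roughly h('mesh', {scale: 2}, h('boxGeometry', null)).
// h is a hypothetical createElement-style function, for illustration.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

const tree = h('mesh', { scale: 2 }, h('boxGeometry', null));
```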


> you can't express a 3d scene with markup alone

What?


you want your view to do something, to participate in the render loop, to animate, user interaction etc. look at the example on the main page: https://github.com/pmndrs/react-three-fiber#what-does-it-loo...


So just do it with VRML5 like HTML5.

    <cube id="myCube" x="0" y="0" z="0" width="10px" height="10px"
          length="10px" onClick="someFoo()" onMouseOver="someOtherFoo()"
          onTouchStart="someBar()">

    <video src="blah.mp4" rotateX="45deg" rotateY="30deg" autoplay="true"
           loop="true" onClick="document.querySelector('#myCube').rotateX(45);">

    ...

No reason the above can't be done. But many people will come up with lame excuses about why we shouldn't have nice things.


i can't say why webgl took over, as well as threejs, but i like how close to the metal they are. i would also much rather have a lower level representation underneath instead of starting outright with a markup language. i think that's why vrml eventually faded. i generally don't see why 3d should be represented in html.

as for react, it merely expresses threejs, and adds something which three doesn't have: self contained components that are now sharable. something like this: https://twitter.com/0xca0a/status/1394697847556149250 just didn't exist in the web previously.


I mean, this looks like a nice, more modern approach - but vrml did allow full interaction? For something that works in current browsers, see eg:

https://github.com/create3000/x_ite/wiki/Sensing-viewer-acti...


There's always one of you. It's not funny.


One of those things that make zero sense but use react, and that makes them cool enough to blog about.


It has benefits similar to what react-dom provides for the DOM: less code, faster performance, less memory consumption, and real interop with a growing ecosystem.


The age of WebGPU is almost upon us. Chrome 94 comes out this quarter. I doubt React will be the major player in this next epoch. It will be something new that can pull off an instant load shared world game.


In my experience the issue with this has always been asset download speeds, not web frameworks. The lack of a storage solution means we can't ship game-quality assets and have things load instantly. I run a large WebGL application and the Chrome network cache is still my biggest issue (e.g. this unsolved bug, despite me having a 100% repro: https://bugs.chromium.org/p/chromium/issues/detail?id=770694 )


Agreed that the framework isn't the bottleneck here; asset loading is!

We have been investing in building ourselves an asset pipeline backend and it's been a dramatic improvement.

Simply scripting Blender and other tools on an EC2 instance to optimize assets and then delivering them via CloudFront. Works really well.


I think we are not verbalizing the elephant in the room. I know game developers run a tight ship and will use the GPU to its fullest (the proof is in the pudding).

The thing I’ve been trying to say all along is: I want applications using the same renderer at a primitive level.


Yes, I agree... case in point: Unity's Wasm/WebGL exporter works really well for the most part, especially if you test and develop specifically for that target, but we don't see it taking off the way Flash did because nobody wants to wait 10 minutes for their page to load.


Threejs already has a webgpu renderer.


And a gltf2/glb loader! Web frameworks will become game engines now.



