
Both in the Python version and so far in C++, I am using my own forward-mode implementation in NumPy and Eigen, respectively. (Why? Well, it was easy, I wanted to learn, it's been fast enough, and most critically, it allowed me to extend it by using interval-valued numbers underneath the AD variables.)

Here's where I do something kind of funny in the AD implementation: basically, just write a class that overloads all the basic math ops with a structure containing the computations of the value, the gradient, and the Hessian. The trick, if there is any, is to have the basic AD variables store gradient vectors with a "1" in a unique spot for each separate variable (and zeros elsewhere). Hessians of these essential variables are zero matrices. Mathematical combinations of the AD variables automatically accrue the gradient and Hessian of ...whatever the expression is. Lagrange multipliers are AD variables which extend the size of your gradient. Oh, and each "point" in, say, 3D, is actually 3 variables, so your space (and size of gradient) is 3N + number of constraints in size. Write a Newton solver and you are off and running.
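The scheme above can be sketched in a few lines of NumPy. This is my own minimal illustration, not the poster's actual code: the class and method names (`ADVar`, `seed`) are made up, and only `+` and `*` are overloaded to keep it short.

```python
import numpy as np

class ADVar:
    """Forward-mode AD variable carrying value, gradient, and Hessian."""
    def __init__(self, val, grad, hess):
        self.val = val      # scalar value
        self.grad = grad    # (n,) gradient vector
        self.hess = hess    # (n, n) Hessian matrix

    @staticmethod
    def seed(val, i, n):
        # An independent variable: gradient has a "1" in its unique
        # slot i (zeros elsewhere); the Hessian is the zero matrix.
        g = np.zeros(n)
        g[i] = 1.0
        return ADVar(val, g, np.zeros((n, n)))

    def __add__(self, other):
        return ADVar(self.val + other.val,
                     self.grad + other.grad,
                     self.hess + other.hess)

    def __mul__(self, other):
        # Product rule for the gradient, second-order product rule
        # (with the symmetric outer-product cross terms) for the Hessian.
        g = self.grad * other.val + other.grad * self.val
        h = (self.hess * other.val + other.hess * self.val
             + np.outer(self.grad, other.grad)
             + np.outer(other.grad, self.grad))
        return ADVar(self.val * other.val, g, h)

# f(x, y) = x*y + x evaluated at (2, 3)
x = ADVar.seed(2.0, 0, 2)
y = ADVar.seed(3.0, 1, 2)
f = x * y + x
# f.val == 8.0, f.grad == [4, 2], f.hess == [[0, 1], [1, 0]]
```

Combinations of seeded variables then accrue the correct gradient and Hessian automatically, exactly as described.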

This would be pretty hefty (expensive) for a mesh. I've used it successfully for splines, where a smaller set of control points controls a surface. Direct optimization on a mesh sounds expensive to me. I assume you looked at constrained mesh smoothers? (E.g., old stuff like transfinite interpolation, Laplacian smoothing, etc.) Maybe newer work in discrete differential geometry can extend some of those old capabilities? What is the state of the art? I have a general impression the field "went another way," but I'm not sure what that way is.
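For reference, the classic Laplacian smoothing mentioned above fits in a few lines: each free vertex is pulled toward the average of its neighbors. This sketch is mine (the function name and the damping factor `lam` are illustrative, not from any particular package):

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iters=10, lam=0.5):
    """Damped Laplacian smoothing.

    points    : (n, d) vertex coordinates
    neighbors : list of neighbor-index lists, one per vertex
    fixed     : set of vertex indices that must not move (boundary)
    """
    pts = np.array(points, dtype=float)
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in enumerate(neighbors):
            if i in fixed or not nbrs:
                continue
            avg = pts[nbrs].mean(axis=0)
            # Move partway toward the neighbor centroid.
            new[i] = pts[i] + lam * (avg - pts[i])
        pts = new
    return pts

# A tiny 1D chain: endpoints fixed, the interior vertex relaxes
# toward the midpoint of its two neighbors, (5, 0).
pts = laplacian_smooth([[0, 0], [5, 5], [10, 0]],
                       [[], [0, 2], []], fixed={0, 2})
```

Its weakness, as the thread suggests, is that it optimizes nothing in particular: it can shrink features and even invert elements near concave boundaries, which is presumably why the built-in tools feel wanting.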

As for the autodiff, I've also got a version that does reverse mode via expression trees, but the forward mode has been fast enough so far and is very simple. The nice thing here is that overloading can also be used to construct the expression tree.
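A minimal sketch of that reverse-mode idea, assuming the same overloading trick: each op returns a node recording its parents and local derivatives, and a backward pass accumulates adjoints in reverse topological order. Names (`Node`, `backward`, `adj`) are mine, for illustration.

```python
class Node:
    """One node of the expression tree built by operator overloading."""
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = parents  # list of (parent_node, local_derivative)
        self.adj = 0.0          # adjoint, filled in by backward()

    def __add__(self, other):
        return Node(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.val * other.val,
                    [(self, other.val), (other, self.val)])

def backward(out):
    """Accumulate d(out)/d(node) into node.adj for every node."""
    order, seen = [], set()
    def topo(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for p, _ in n.parents:
            topo(p)
        order.append(n)
    topo(out)
    out.adj = 1.0
    for n in reversed(order):           # reverse topological order
        for p, d in n.parents:
            p.adj += n.adj * d          # chain rule

# Same example: f(x, y) = x*y + x at (2, 3)
x, y = Node(2.0), Node(3.0)
f = x * y + x
backward(f)
# x.adj == 4.0, y.adj == 2.0
```

One sweep gives the whole gradient regardless of the number of inputs, which is where reverse mode pays off, at the cost of storing the tree.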

Of course, if you only do gradient-based optimization you may not need the Hessian. It's there for Newton's method.
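The Newton solver itself is the easy part once AD hands you the gradient and Hessian. A bare-bones sketch (no line search or trust region, which a real solver would want; the callables here stand in for the AD machinery):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method: solve H(x) dx = -g(x) and step until ||g|| is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# Quadratic test problem f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2,
# so Newton converges in a single step.
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
xmin = newton_minimize(grad, hess, [0.0, 0.0])  # -> approx [1, -3]
```

With the Lagrange-multiplier setup described above, the same loop runs on the full KKT system of size 3N + number of constraints.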



Thanks! I am pretty sure nobody does direct optimization on mesh quality because it is so hefty. I did come across a PhD thesis that was doing it for fluid-structure interaction, and the author's conclusion was that it was inferior to other techniques. I have a few tricks which will hopefully make the problem more tractable.

I use FEMAP at my day job and have found Laplacian smoothing and FEMAP's other built-in tools wanting.

I am currently thinking that my goal is to try to use reinforcement learning to build high-quality meshes. In order to do that you need a loss function, and if you are building a loss function, you might as well wrap an optimizer around it.


Huh, machine learning for high quality meshing sounds like a great idea! (RL sounds like turning this idea up to 11 — exciting stuff and best of luck!)

FEMAP seems a hot topic these days. Some folks at my work are building an interface to it for meshing purposes.




