I've posted this on HN before, but in the world of optimization, I really like the book Convex Functional Analysis by Kurdila and Zabarankin. It's the only book I know of that gives a comprehensive treatment of when a solution to an optimization problem exists and can be attained. Essentially, the entire book builds up to what they call the Generalized Weierstrass Theorem, which states the appropriate conditions in a few different ways.
Anyway, I work on optimization solvers, and this is a big deal since there's always a question of whether the problem we pose can really be solved. Numerically, the solver can churn forever, and maybe the problem is just hard, or maybe there's no solution. Outside of LPs, it's really hard to tell. Even convexity isn't enough to guarantee a solution. For example, min exp(-x) has no minimizer even though exp(-x) is strictly convex. There's an infimum (zero), but it's never attained.
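To make the exp(-x) point concrete, here's a minimal sketch (plain gradient descent, not any particular solver): the iterates march off toward +infinity while the objective creeps down toward its infimum of 0, which is never attained.

```python
import math

def grad_descent(f_prime, x0, lr=1.0, steps=10000):
    """Plain gradient descent; nothing solver-specific here."""
    x = x0
    for _ in range(steps):
        x -= lr * f_prime(x)  # f'(x) = -exp(-x), so x only increases
    return x

# f(x) = exp(-x) is strictly convex, yet has no minimizer.
x = grad_descent(lambda x: -math.exp(-x), x0=0.0)
# The iterate keeps growing without bound; the objective approaches
# the infimum 0 but never reaches it.
print(x, math.exp(-x))
```

No stopping criterion based on the objective value alone can distinguish "converging slowly" from "chasing an unattained infimum" here, which is exactly the practical problem.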
Mostly, this is a way to say that I agree that the theory matters. I like the book above for establishing these conditions in optimization.
Edit 1: Does anyone know a good, no-nonsense book for establishing similar conditions for either ODEs or PDEs? In case it matters, I prefer looking at things from a general functional analysis/operator point of view.
> Also, it’s not obvious from looking at the equation that there should be a problem at t = 1.
I found this curious; it seemed obvious from the equation that there was going to be a problem. If you think about it, u' = u^2 + 1 has the solution tan (up to some constants), which is singular. So a function that satisfies y' = y^2 + t^2 will satisfy y' > u' for t > 1, and so should also have a singularity; you just don't know exactly where it's going to be.
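A quick numerical check of that comparison argument (a sketch; I'm assuming y(0) = 1 as an illustrative initial condition, since the thread doesn't pin one down): forward Euler on y' = y^2 + t^2 reaches astronomically large values shortly before t = 1, and refining the step size moves the numerical blow-up time toward the true one rather than delaying it forever.

```python
def euler_blowup_time(h, cap=1e12):
    """Forward Euler on y' = y^2 + t^2, y(0) = 1 (illustrative IC).
    Returns the time at which the numerical solution exceeds cap."""
    t, y = 0.0, 1.0
    while y < cap and t < 2.0:
        y += h * (y * y + t * t)
        t += h
    return t

for h in (1e-2, 1e-3, 1e-4):
    print(h, euler_blowup_time(h))
```

The comparison bounds put the singularity somewhere in a window (y' >= y^2 forces blow-up by t = 1 for this IC; y' <= y^2 + 1 for t < 1 keeps it after pi/4), and the numerics land inside it.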
Everything is obvious once you know about it. But be honest--if you looked at that equation in the course of your daily work (rather than in a blog post about solutions that don't exist), would you immediately think "well that's gonna blow up real quick"? I sure wouldn't.
Yes, I would, that was the point I was trying to make, it looks like it has a singularity. In my comment I was trying to explain how one might be able to spot it in advance.
I guess. It doesn't jump out at me the way that 1/x or tan(x) would though. But I haven't had a lot of reason to mess with differential equations since undergrad, so maybe I'm just rusty.
> When I first saw this example, my conclusion was that it showed how important theory is.
The author drew the exact opposite of the conclusion I'd draw here. This seems like the perfect example for arguing why theory isn't important. Anybody who knows nothing about the theory could still tell that there's no solution -- precisely because lowering the step size (error tolerance) doesn't stabilize the graph and instead keeps causing larger and larger changes. If a solution existed, lowering the step size would eventually make a vanishingly small difference, and it's clearly doing the opposite. You don't need Picard–Lindelöf to tell you that.
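That refinement test is easy to mechanize (a sketch, again assuming y(0) = 1 as an illustrative initial condition): at a point where the solution exists, refining the step converges; past the blow-up, the computed values never settle.

```python
def euler_value(t_end, h):
    """Forward Euler for y' = y^2 + t^2, y(0) = 1 (illustrative IC),
    integrated up to t_end; returns the final value."""
    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        y += h * (y * y + t * t)
        t += h
    return y

# Before the blow-up (t = 0.5), refinement converges...
a = [euler_value(0.5, h) for h in (1e-2, 1e-3, 1e-4)]
# ...past it (t = 1.0), the values change wildly as h shrinks.
b = [euler_value(1.0, h) for h in (1e-2, 1e-3, 1e-4)]
print(a)
print(b)
```

The diagnostic is exactly the one described above: successive refinements at t = 0.5 agree to more and more digits, while at t = 1.0 they diverge from each other.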
Isn't the purpose of the example to make it hard to trivially deduce y? If you know y = 1/(1 - t), then yeah, it's undefined at t = 1. The example given doesn't seem to lend itself to that, though, and one lesson is to know when your tooling isn't good enough.