
Maybe there are tools that will do that. My knowledge of the industry is about 5 years out of date, and I was no expert even then. But we had no such solution and, in fact, didn't use dynamic memory allocation at all, never mind newfangled gizmos like garbage collection.

In principle, yes, I think it's possible... at least, I don't think it reduces to the halting problem. But it would be tricky. It would be relatively simple to reason statically about the rate of memory allocation (iterate through all paths leading to a 'new' operator), but for this purpose you also care about when an object becomes garbage and can be deallocated. That occurs when the last reference to the object is overwritten or goes out of scope, which is not so easy to determine in the presence of aliasing.
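To make the "iterate through all paths" idea concrete, here's a minimal sketch in Python, assuming a toy acyclic control-flow graph where each node is annotated with its allocation count (loops would need iteration bounds, which hard-real-time code is expected to have anyway). The `toy_cfg` structure and node names are hypothetical, purely for illustration:

```python
def max_allocs(cfg, node, memo=None):
    """Return the worst-case number of 'new' operations on any path from `node`.

    `cfg` maps each node to (allocs_here, [successor nodes]).
    Assumes the graph is acyclic; loops must be unrolled or bounded first.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    allocs_here, succs = cfg[node]
    # Worst case = allocations here plus the worst successor path.
    best = allocs_here + max((max_allocs(cfg, s, memo) for s in succs), default=0)
    memo[node] = best
    return best

# Hypothetical example: entry branches into paths with 2 and 3 allocations.
toy_cfg = {
    "entry": (0, ["a", "b"]),
    "a": (2, ["exit"]),
    "b": (3, ["exit"]),
    "exit": (0, []),
}
print(max_allocs(toy_cfg, "entry"))  # 3: the worst-case path goes through "b"
```

This only bounds allocation; as noted above, the hard half is knowing when objects die, which aliasing makes undecidable in general.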



In principle it reduces to the halting problem, but given that the domain we're discussing is real-time systems, you'd better already be dealing with programs that you know will halt (in the sense of producing an answer) not just in a finite amount of time but below some particular bound! Once you've constrained yourself to those programs, you may well be able to get a general result... but as others have noted, what we really want is results about specific programs, and the halting problem has nothing to say there.

I don't know that there exists tooling to construct these proofs, beyond general proof-constructing support, but I'm not working in hard-realtime environments so I'm sure I'm not aware of everything going on either.


>It would be relatively simple to reason statically about the rate of memory allocation (iterate through all paths leading to a 'new' operator), but for this purpose you care about cases where an object becomes garbage and can be deallocated.

No I don't. All I care about is having enough free memory to make new allocations. I don't care how much garbage there is, I just care that when there's garbage it's being freed fast enough to support my allocations.


Just to state the obvious: in practice, over the long term, you've got to be deallocating at least as fast as you're allocating, or something nasty will happen that will, at the very least, violate your performance constraints.


Right. And my point is that the deallocation vs. allocation ratio is the only metric you really need. How fast garbage is made is completely irrelevant because over the long term it's bounded by the allocation rate. You don't have to solve the hard problem of figuring out how fast garbage can be made, you can solve the much easier problem of bounding allocation. And of course in either scenario you have to show that there are no leaks.


> [...] at least, I don't think it reduces to the halting problem [...]

It does. But the halting problem is solvable for specific instances, e.g. if you are writing the programmes.



