Whoever said this has been proven wrong. This stuff works; it's just expensive. The only real question is when formal verification will become convenient enough to be widely used in industry. I reckon that's 15-20 years away. Right now organisations like Facebook (http://fbinfer.com) and Microsoft (https://www.fstar-lang.org) use it, but sparingly.
Facebook's Infer is just an ordinary static analysis tool, albeit one inspired by separation logic. It is unsound and does not prove that code is correct.
The problem is that most of the languages we use are not very safe, which means that if even one part of the code is unverified, you can't prove the whole program correct. I read a paper recently that tried to do modular verification of C by adding runtime contracts at boundaries [1]. In theory it's sound, but it's really slow in practice; you may even have to rewrite code in certain ways to make it perform acceptably.
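The boundary-contract idea can be sketched quickly. This is a hypothetical illustration in Haskell rather than C (the names `unverifiedSqrt` and `checkedSqrt` are made up, not from the paper): at the boundary between verified and unverified code, you check the precondition on entry and the postcondition on exit, so the verified side can trust the result without trusting the implementation.

```haskell
-- Stand-in for code from an unverified module. The verified caller
-- relies on the precondition (x >= 0) and the postcondition (r*r ~ x).
unverifiedSqrt :: Double -> Double
unverifiedSqrt = sqrt

-- A runtime contract inserted at the module boundary: check the
-- precondition before calling, and the postcondition on the result.
-- This is where the slowdown comes from -- every boundary crossing
-- pays for these checks.
checkedSqrt :: Double -> Double
checkedSqrt x
  | x < 0     = error "contract violation: sqrt requires x >= 0"
  | otherwise =
      let r = unverifiedSqrt x
      in if r >= 0 && abs (r * r - x) <= 1e-9 * max 1 x
           then r
           else error "contract violation: result is not a square root"
```

This is also why you may have to restructure code for performance: moving a check out of a hot loop, or widening a module boundary so it is crossed less often, reduces how many times the contracts fire.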
It looks like this compiler never actually reports any errors. (It uses the Maybe type, so on failure you just get Nothing.)
I appreciate that it's a toy compiler, but I really wish people wouldn't leave that part out. The quality of a compiler's user interface depends almost entirely on its error messages, so it's important to demonstrate how they're done.
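To make the complaint concrete, here is a minimal sketch (my own toy example, not the article's code) of the difference: with Maybe the caller only learns *that* compilation failed, while switching the same code to Either lets each failure carry a message.

```haskell
-- A hypothetical toy expression language, for illustration only.
data Expr = Num Int | Div Expr Expr

-- Maybe-based evaluation: any failure collapses to Nothing,
-- so there is nothing useful to show the user.
evalMaybe :: Expr -> Maybe Int
evalMaybe (Num n)   = Just n
evalMaybe (Div a b) = do
  x <- evalMaybe a
  y <- evalMaybe b
  if y == 0 then Nothing else Just (x `div` y)

-- Either-based evaluation: the same structure, but Left carries
-- an error message the compiler can report.
evalEither :: Expr -> Either String Int
evalEither (Num n)   = Right n
evalEither (Div a b) = do
  x <- evalEither a
  y <- evalEither b
  if y == 0
    then Left "division by zero"
    else Right (x `div` y)
```

Because Maybe and Either are both monads, the change is almost entirely in the type signature; a real compiler would go further and attach source positions to the error value, but even the String version demonstrates the mechanism.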
Quite strange that the article doesn't mention CompCert, though, which has an extensive bibliography:
http://compcert.inria.fr/publi.html
Edit: the person I quote is a researcher in formal verification; I think it's Gérard Huet from Inria.