If we're talking about high-reliability code, one thing claimed about Haskell is: "if it compiles, it has no bugs".
How close is that to the truth? And how close are we to having that kind of capability for real-time programming (even assuming we're willing to forsake portability, community, and maximum efficiency to some extent)?
Haskell is better than most languages at achieving this, but it's still far from the truth. Type systems help a lot, but they can't help you in every way. I write Haskell commercially and I can tell you there are many times our test suite catches an issue the compiler didn't. (In fact, just today we found a bug in which a particular maximum function returned the maximum of all historical values of a time-varying quantity but forgot to include the current value. The test suite, not the compiler, found it.) But it's also true that with Haskell you can maintain a smaller, less extensive test suite for the same level of confidence in your code.
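The shape of that bug can be sketched in a few lines (hypothetical names and types; the real code was surely more involved). Both versions below have exactly the same type, so the compiler is equally happy with either:

```haskell
-- Hypothetical sketch of the kind of bug described above: a maximum
-- over all historical values of a quantity that should also include
-- the current value.

-- Buggy: type-checks fine, but silently ignores the current value.
maxSoFarBuggy :: Int -> [Int] -> Int
maxSoFarBuggy _current history = maximum history

-- Fixed: the current value is included in the fold.
maxSoFar :: Int -> [Int] -> Int
maxSoFar current history = maximum (current : history)

main :: IO ()
main = do
  print (maxSoFarBuggy 10 [1, 2, 3])  -- 3 (wrong)
  print (maxSoFar      10 [1, 2, 3])  -- 10 (right)
```

Only a test with a case where the current value happens to be the maximum distinguishes the two.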
Another thing is, if you want to write high-reliability code in a general-purpose language, you have to have the discipline not to use every language feature, because a general-purpose language is by default not safe enough. This article basically outlines a subset of the C language for writing high-reliability software, and I imagine that if Haskell were used for similar software, a large number of language features and functions in the base library would need to be banned.
I doubt that statement can be true in any programming language. Bugs come in many shapes, and a functional language can only prevent a fraction of them. You can have logic errors, misunderstood requirements, wrong database queries, etc.; the list is pretty much infinite.
The Curry–Howard correspondence (https://en.wikipedia.org/wiki/Curry–Howard_correspondence) says there is a correspondence between logical statements and types in a sufficiently advanced type system. So yes, there are languages aimed at eliminating logic errors, and Haskell goes pretty far (though its type system isn't quite advanced enough).
However, you can consider the formal specification of your intent (the type) to be an example of fully declarative programming.
Since you write your type without any care about how it might be executed -- the holy grail of abstraction -- you are less likely to make errors.
When using a type system that isn't capable of fully specifying what you're doing (e.g. Haskell's), you are of course subject to making implementation errors within the range of possible programs that type-check. But in practice it's usually enough to catch the sorts of mistakes that you're likely to make.
> However, you can consider the formal specification of your intent (the type), to be an example of fully declarative programming.
The vast majority of formal specification and verification tools (I believe Coq and Agda are the only exceptions, and they are rarely used in the industry) express the intent directly in logic rather than in the types (HOL in Isabelle and HOL Light, ZFC+LTL in TLA+ and maybe Scade, ZFC in Alloy, and a typed set theory in SPARK, I believe).
> But in practice it's usually enough to catch the sorts of mistakes that you're likely to make.
I think this claim is supported by little evidence. Most non-dependent type systems are too weak (or require cumbersome encodings) to express all but the most trivial properties (e.g. they can't even express that the value returned from a max function is indeed maximal, let alone more elaborate properties). Their expressive strength is that of a finite-state machine. How much that prevents real bugs requires empirical study.
You need more than an advanced type system. You need a declarative constraint solving and proof system. Instead of telling the compiler how to perform a task, you would declare the assumptions and the desired relationships and then, with the help of the proof system, determine what implementation fulfills exactly those constraints.
An expressive and ergonomic type system goes a very long way toward codifying that intent.
For example, it's quite obvious what the intent of this Idris function is: `f : Vect n (Vect m a) -> Vect m (Vect n a)` (where `Vect` is the type of lists of a given length).
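A similar thing can be sketched in GHC Haskell with a length-indexed vector (a minimal sketch using `DataKinds` and GADTs; real code would use a library such as `vec` or `sized-vector`). As in the Idris version, the type alone says "n rows of m columns become m rows of n columns":

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Unary naturals, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- A list of exactly n elements.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A runtime witness for the column count, needed to turn an
-- empty matrix into m empty rows.
class Replicable (n :: Nat) where
  vreplicate :: a -> Vec n a

instance Replicable 'Z where
  vreplicate _ = VNil

instance Replicable n => Replicable ('S n) where
  vreplicate x = VCons x (vreplicate x)

-- Cons each element of a column onto the corresponding row.
vzipCons :: Vec m a -> Vec m (Vec n a) -> Vec m (Vec ('S n) a)
vzipCons VNil         VNil             = VNil
vzipCons (VCons x xs) (VCons ys yss)   = VCons (VCons x ys) (vzipCons xs yss)

-- The type mirrors the Idris signature: Vec n (Vec m a) -> Vec m (Vec n a).
vtranspose :: Replicable m => Vec n (Vec m a) -> Vec m (Vec n a)
vtranspose VNil             = vreplicate VNil
vtranspose (VCons row rows) = vzipCons row (vtranspose rows)

vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs

main :: IO ()
main = print (map vtoList (vtoList (vtranspose matrix)))
  where
    matrix :: Vec ('S ('S 'Z)) (Vec ('S ('S ('S 'Z))) Int)
    matrix = VCons (VCons 1 (VCons 2 (VCons 3 VNil)))
             (VCons (VCons 4 (VCons 5 (VCons 6 VNil))) VNil)
```

An implementation that, say, dropped a row or swapped nothing at all would simply not type-check against this signature.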
Curry–Howard just says that computation and proof simplification work similarly. Correctness has more to do with how rich your type system is and how you take advantage of it during modeling (to rule out undesired program states as ill-typed).
I came here to read stories about floating point rounding errors and was surprised there is no mention of them. Maybe all failsafe code is done with integer math...
Haskell's type system protects from a certain class of bug, but it can't prevent logic errors - something like using < instead of > won't trigger a build error, but it certainly is a bug.
Sometimes you can express the relevant relationship in the type system. Just adding types to an existing program won't tell you much, but if you work with the type system it's possible to move a lot of your business logic there.
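A small, standard example of "working with the type system": Haskell's `maximum` crashes at runtime on an empty list, but switching the input type to `Data.List.NonEmpty` makes "the list is non-empty" a compile-time fact rather than a runtime check.

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- Partial: riskyMax [] throws an exception at runtime.
riskyMax :: [Int] -> Int
riskyMax = maximum

-- Total: the type itself guarantees at least one element, so the
-- emptiness check is forced onto the caller before this is reached.
safeMax :: NonEmpty Int -> Int
safeMax = maximum

main :: IO ()
main = print (safeMax (3 :| [1, 4]))  -- prints 4
```

The `<`-vs-`>` class of mistake is still possible inside `safeMax`, but a whole class of "forgot the empty case" bugs is gone.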
The stricter and more advanced a type system is, the closer this comes to the truth. Of course, no compiler can check that the function you implemented is the function you wanted to implement, but it can at least check that it reliably computes the function you typed in.
The question hinges on how many of the errors in your program express themselves as type errors too.
Often a lot closer than in other languages, but that's still not close at all. And it depends a lot on how careful your code is. If you want to rely on the type system more heavily, you have to do more work upfront. E.g., using a simple SQL library that just takes queries as strings vs building your queries in a type-safe EDSL. You can write sketchy Haskell just like any other language, but you're likely to be much more aware of it.
>And how close are we to having that kind of capability for real-time programming?
Last time I looked it seemed like real-time Haskell still had a fair way to go. Presumably because it has garbage collection, lazy evaluation by default and other things that make it hard to prove time bounds. It can also be really hard to optimize sometimes IMO. But I doubt the features that make Haskell reliable are dependent on the features that don't suit real-time programs. I don't know of any pure functional languages with similar type systems, though. https://en.wikipedia.org/wiki/List_of_programming_languages_... has a pretty small list.
These rules are not just about correctness in the sense that "This code's computations are accurate" but also in the sense that "This code's performance is constant and predictable". Haskell helps with the first sense, but not really the second, from what little experience I have.
What they're saying is that Haskell (and Scala, and others) have type systems good enough that your data should never be wrong.
That has nothing to do with logic, so if your code should be `if (foo)` and you write `if (bar)` or `if (!foo)`, there is no way for a type system to catch that.
Additionally, there are a lot of constraints you can't capture with type systems. For example: given type Foo, type Bar extending Foo, and types * extending Foo, my type should extend Foo but not be or extend Bar.
Even just with raw data, imagine if your function required, and was only valid for, a prime number, or an even integer between 2 and 22, or a "sentence composed of 3 or more words, none of which are Chinese", etc. It gets hard very quickly.
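In Haskell the usual workaround for constraints like these is a smart constructor: the invariant is checked once at the boundary, and the rest of the program trusts the type (a sketch; `EvenInRange` is a made-up name, and in a real program the raw constructor would be hidden by not exporting it from its module):

```haskell
-- A "smart constructor": the only sanctioned way to build an
-- EvenInRange is mkEvenInRange, which validates the invariant
-- "even integer between 2 and 22" from the comment above.
newtype EvenInRange = EvenInRange Int
  deriving (Show, Eq)

mkEvenInRange :: Int -> Maybe EvenInRange
mkEvenInRange n
  | even n && n >= 2 && n <= 22 = Just (EvenInRange n)
  | otherwise                   = Nothing

main :: IO ()
main = mapM_ (print . mkEvenInRange) [4, 5, 24]
-- Just (EvenInRange 4)
-- Nothing
-- Nothing
```

The check still happens at runtime, but the type guarantees it happens exactly once, at a known place; everything downstream can rely on the invariant without rechecking it.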