I like that there is this left-to-right flow. I think it’s a bit nicer to read than if-let ordering, where the pattern comes before the thing it will be matched against. I think it’s also good for OCaml-style constructor disambiguation, which tends to go in lexical order.
Another nice aspect of making guards less of a special case is that it avoids complexities in deciding whether a binding is unused. I believe this logic was a source of many compiler warning bugs in OCaml.
This syntax doesn’t seem to solve the following problem with matching, where there are two paths to the same binding: say you have an option in one branch but it isn’t optional in the other, and you’d like to handle both cases with the same code. Currently you can do that with a `match (match …) with …` pattern.
I worry that the semantics around exhaustiveness and mutable values may be confusing, though I guess OCaml already has that problem:
type t = { mutable x : bool }

let n = function true -> 1 | false -> 0

let f t =
  match t with
  | { x = false } when (t.x <- true; false) -> -1
  | { x } -> n x * 2 + n t.x
What does `f { x = false }` return? And similarly, what if you changed the second case to be two cases instead of binding `x`?
As mentioned in a response to a sibling comment, we plan to support `or`, which should address the problem you mention. (If not, would you have an example of what you mean?)
> I worry that the semantics around exhaustiveness and mutable values may be confusing, though I guess OCaml already has that problem
We added pattern matching and exhaustiveness to Dart not that long ago, and dealing with exhaustiveness and mutability was a big concern since Dart (unlike more strictly functional languages) generally doesn't avoid mutability.
Our solution was that whenever a pattern accesses a property, the language implicitly caches that value. Any future accesses to the same property in that switch statement/expression use the previously cached value. That way, an entire set of switch cases is always operating on an immutable snapshot of data so that exhaustiveness can't be violated by side effects. Spec:
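A rough rendering of that snapshot idea in Rust terms (a sketch only, not Dart's actual mechanism: the implicit caching is imitated by reading the field once before matching):

```rust
// Sketch: imitating the "cache the accessed property" rule.
// The guard mutates t.x, but the match runs on the cached copy, so the
// set of reachable arms is fixed at the moment of the snapshot.
struct T {
    x: bool,
}

fn f(t: &mut T) -> i32 {
    let x = t.x; // the implicit cache, made explicit here
    match x {
        false if {
            t.x = true; // side effect in the guard
            false
        } => -1,
        false => 0,
        true => 1,
    }
}
```

With the snapshot, calling `f` on `{ x: false }` returns 0 and leaves `t.x` set to `true`; a language that re-read `t.x` for the second arm could instead observe the mutation mid-match.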
C# pattern matching gets very close. I think C# can do everything except for their last example ("splitting conditional prefixes in arbitrary places"). One of the things I miss when switching to Rust.
if foo(args)
  == 0 then "null"
  |> abs
    > 100 then "large"
    < 10 then "small"
  else "medium"
That last syntax took me a while to parse in the paper, but I can imagine numerous places in our everyday code where such syntax would more concisely capture intent.
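Read as ordinary conditionals, that snippet is roughly the following (a sketch in Rust, with `n` standing for the result of the hypothetical `foo(args)`):

```rust
// Assumed desugaring of the UCS snippet: `|> abs` pipes the scrutinee
// through `abs` before the remaining comparisons.
fn classify(n: i32) -> &'static str {
    if n == 0 {
        "null"
    } else {
        let a = n.abs(); // the `|> abs` step
        if a > 100 {
            "large"
        } else if a < 10 {
            "small"
        } else {
            "medium"
        }
    }
}
```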
Kudos to the authors. It takes some fortitude to try to make a contribution to something as fundamental and ubiquitous as the "if" statement. Appreciate providing the grammar and the thorough explanations, will save a lot of pain when trying to implement!
> Below, the code on the left is equivalent to the more verbose ML expression on the right:
Wait a sec ... ML has the pipe of Elixir? Damn, I didn't know that!
I like the brevity, but I don't like the (at least visual) need to indent or whitespace focus. I guess I am too much a sucker for S-expressions now. But a similar structure could probably be written as a macro in a lisp, avoiding any whitespace focus or even dependency on it.
def getNaturalName(tag, globals)
    if globals.lookup("0") is Just(Global(_, _, term))
        if term is Numeral(_, type, _)
            return Just(Name(getTermTag(type)))
        error showSyntaxError("0 must be a numeral to use numerals", tag)
    return Void
Though this ultimate conditional syntax is more general because lambda zero only allows one destructuring per conditional to simplify parsing.
Imagine Left and Right contained data of compatible types, then I'd like to extract that data (regardless of tag) like so:
if x is Left(y) or x is Right(y) then ...
That way I only need to write the `...` part once. (This could be combined with the rest of the pattern matching machinery in interesting ways, and would probably need to have eager matching / short-circuiting semantics in case both cases match).
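For comparison, Rust's or-patterns already give this eager, first-match semantics, provided both alternatives bind the same names at the same types (hypothetical `Side` type):

```rust
// Both alternatives bind `y`, so the arm body is written once.
enum Side {
    Left(i32),
    Right(i32),
}

fn extract(x: Side) -> i32 {
    match x {
        Side::Left(y) | Side::Right(y) => y,
    }
}
```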
We definitely want to get into that! Unfortunately it's not completely straightforward. A simple desugaring doesn't work due to our support for intermediate bindings and computations, which we don't want to recompute.
Basically you want to apply one of N functions to each of the items in your data iterable, so you have a predicate to figure out which function to apply first. The degenerate case is when you have just 2 functions and your predicate returns a boolean (0 or 1).
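The degenerate two-function case can be sketched as follows (hypothetical helper; the N-function version would replace the bool with an index into a slice of functions):

```rust
// Select one of two functions by a boolean predicate.
fn apply_by<T, U>(
    pred: impl Fn(&T) -> bool,
    f: impl Fn(T) -> U,
    g: impl Fn(T) -> U,
    x: T,
) -> U {
    if pred(&x) { f(x) } else { g(x) }
}
```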
I don't think that this is `with` (which is really a series of nested `case` statements with exactly two paths per case).
case is_email_address?(email) do
  true ->
    case String.length(code) == 6 do
      true ->
        case EmailConfirmations.get_email_confirmation(email) do
          %EmailConfirmation{} ->
            case EmailAddresses.get_email_address(email) do
              nil ->
                case Users.create_user(data) do
                  {:ok, user} ->
                    case EmailAddresses.create_email_address(user, email) do
                      {:ok, email_address} ->
                        # success block
                      fail -> fail
                    end
                  fail -> fail
                end
              fail -> fail
            end
          fail -> fail
        end
      fail -> fail
    end
  fail ->
    case fail do
      # else conditions
    end
end
Elixir will be able to do something closer to `case` exhaustiveness checking with the gradual typing being built, and dialyzer can sort of perform exhaustiveness checking as long as your type specs are written well enough. (I’ve observed two things about `with` statements in Elixir: they should have more than one condition clause, or they should be written as `case` statements; they should not have an `else` clause, as that suggests that error path normalization isn't happening at the right level.)
In many ways what's described is much closer to `cond`, except that `cond` is completely open ended except for the required `true` clause, which means that there is absolutely no exhaustiveness checking.
I'll admit that I mostly skimmed this and I don't use ML or Haskell, but I couldn't see what benefit this provides over Elixir's `case` or Rust's `match` (recognizing that the former can't do exhaustiveness checking as yet, but `match` absolutely can).
They should. This nesting is why `with` exists, but even if `cond` and `case` could test for exhaustiveness (they can't currently, and I don't think that `cond` ever could), `with` would not be able to test for exhaustiveness, since there's always two paths described.
Kudos for publishing in OOPSLA, but alas I'm skeptical. This seems to add visual clutter/mental overhead as one needs to parse a decision tree rather than having a "flat" sequence of patterns with the optional when-condition.
Also,
> all the branches in the corresponding pattern matching expression would need to destructure the pair, even when only one of its components is needed
Can't one just bind the unneeded component to `_`? Isn't that actually a good thing, as it clearly self-documents that we won't need that value on that branch?
With traditional matching there are up to five different things:
- if x then y else z. This is roughly like a match with a true and false case but it depends on the language how much that is
- match e with { p -> e }. This is the classic pattern match case
- if let p = e then x. This is roughly equivalent to (match e with p -> x | _ -> ())
- match e with { p when e -> e }. This is matching with a guard, but I’ve counted it as a special case because it doesn’t easily desugar into a match because of the binding/evaluation order, and the guard is special because it can only go at the top level of the match clause, so it isn’t just a special kind of pattern (though maybe it could be)
- let p = e. This is one-case pattern matching used for binding variables or destructuring.
The paper proposes a way to make the first four cases obvious parts of one more unified thing, which makes the language potentially simpler and may reduce some weird warts like where guards can go.
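In Rust terms, the first four items on that list stay syntactically distinct, which is exactly the fragmentation the paper targets (illustrative only):

```rust
// Four separate constructs for one underlying idea: conditional
// evaluation driven by patterns.
fn demo(v: Option<i32>) -> i32 {
    // 1. if/else on a boolean
    if v.is_none() {
        return 0;
    }
    // 2. if-let: a pattern in condition position
    if let Some(0) = v {
        return 1;
    }
    // 3./4. match, with and without a guard
    match v {
        Some(n) if n < 0 => -1, // guarded arm
        Some(n) => n,           // plain arm
        None => unreachable!(), // already handled above
    }
}
```

(The fifth form, `let p = e`, exists in Rust as plain irrefutable destructuring, e.g. `let (a, b) = pair;`.)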
There's such a thing as too high an abstraction. Sometimes 4 different operations really are just 4 different operations and not cases of some mega-construct.
After a few years working in Haskell as a hobby, I came to the conclusion that complicated pattern matching is an antipattern.
I didn't think of it this way at the time, but my current belief is that complicated pattern matching is an extremely tight coupling to what is being matched, and that's not a good thing.
I'm not convinced super powerful pattern matching is even a good idea. I think a lot of the love people have for it is precisely their joy at being able to introduce tight coupling with such syntactical convenience. Syntax affording tight coupling is not a good thing.
(Simple pattern matching, especially on branches of a sum type, is fine and useful, especially when you are doing it precisely because the code will need to be changed if the sum type adds or subtracts other branches, and this need comes from something "real" and not merely false coupling. But the deeper in you go with the pattern match the more whatever it is you are reaching in for should be abstracted out into a function.)
I agree. I'm generally bearish on any language constructs that are difficult to implement. If it's difficult to implement, that probably means there are subtle edge cases that will manifest in confusing behavior at both runtime and compile time.
Sometimes a simple conditional statement grows into something more complex over time and it's nice to keep the same malleable syntax, particularly (as is the case of the article) with good semantic and performance guarantees.
Inspired by duality, I've been trying to work out a language where there's a more obvious correspondence/symmetry between expressions (which evaluate in an environment of named bindings to produce an anonymous value) and patterns (which destructure an anonymous value to produce an environment of named bindings).
In theory, yes: I've used it to derive a (minor but accepted as published) patch to a Turing Award winner's paper; in practice, no.
[the semantics haven't changed much over the past years, but the syntax? even as a single person, who ought to be able to come up with a unified design, my different temporal instantiations insist on bike shedding]
I like overloading most of the time but prefer specialized syntax for the various conditional idioms. Encapsulating it all in one keyword makes "if" harder to parse.
Not the implementation, I think one could do better, but the fact that they identified an interesting opportunity: coming up with the Ultimate Conditional Syntax for pattern matching.
I would love to see them take one more crack at it, and this time try to think about how to reduce the syntax to the bare minimum.
Note that you've saved exactly no keystrokes, although this is composable into expressions (if you are using some JavaScript derivative; in ML it would typically already be using the if-form).