The answer to the problem can be found in the comments, but not in a single place:
X<T>{{}};
will:
for T = DefCtor, call the initializer_list constructor with a single default-constructed element. The outer braces are for the initializer list and the inner ones are for the DefCtor constructor. The braces for the T constructor itself are allowed to be elided.
for T = NoDefCtor, the previous overload resolution is not valid as NoDefCtor has its default constructor disabled, so the inner braces match an empty initializer_list, while the outer ones match the X constructor.
finally, for T = DeletedDefCtor one would expect the same as NoDefCtor, but there is a quirk in the language: explicitly deleting the default constructor still allows aggregate initialization (which is enabled for any class that doesn't otherwise have a constructor); this allows for the same overload resolution as in DefCtor. This is considered a defect and will hopefully be corrected soon.
While initializer_list allows for some nice syntactic sugar, I think it is now generally considered to be misdesigned, especially as it breaks the otherwise great uniform initialization syntax ({..}) that was added at the same time in C++11. There is an ongoing effort to fix it, but it is hard to do without breaking backward compatibility.
And that fix is still far from fixing the language. I had thought that deprecating brace elision for initializer_list and always requiring double braces would be a workable, although backward-incompatible, fix, but it still causes issues. There was a recent thread on the topic on std-discussion.
Yeah, double-braces was also my first guess, but I found out that Clang and GCC handled it differently (!) so I didn't even try to understand what the std mandates.
Like "Wargames", the best way to win is not to play.
Between moving from MS to POSIX environments in the 90s, and the "death" of Borland, this left me with few options other than another Faustian deal with Java-land, alas.
I'm actually liking Javascript nowadays, though. Especially as more of an FP-capable language than an OOP-mandatory language.
That said... if you are grinding out games on a $200 piece of console hardware (a "toaster emulator"), C/++ is your gig. Possibly with a thin patina of scripting over the top of it, I suppose.
I'm loving the fact that languages seem to be moving towards some of the better parts of FP and away from some of the worse parts of OOP. I'm not particularly fond of fully FP languages but they have some really nice ideas that I've been enjoying using.
I agree. And for myself at least, I could also say:
I'm not particularly fond of fully OOP languages, but they have some nice ideas that I've been enjoying using.
If a thing I want to make is a stateless process, modeling it as a function seems nicest. If that function can be truly a function (i.e. pure), that's best. If it can be semi-pure, in the sense that its only access to mutation is via its arguments (e.g. it takes in a DB handle explicitly), that's alright. If it has other access to global state, perhaps reconsider.
If a thing I want to make is a container for state, and its state transitions have some structure to them, then I hear there's this thing called an ADT, and OO seems pretty nice.
Neither of those statements is an architectural manifesto, and OO design and functional design both seem like bad ideas to me. That is, requiring modules to be instances of language-syntax-level functions or language-syntax-level objects seems like a bad idea to me. I guess if you go read the plain-English descriptions of Smalltalk objects, that matches my idea of how modules should work, except when they should be functions in a plain-English sense.
FP and OO are completely orthogonal in my eyes. I can utilise immutable data with referential transparency just as well with Objects as I can with Modules + Structs, but only the latter is called "FP".
I think because Java and C++ championed a very impure OO, based around C-like imperative semantics, people regard huge bundles of mutable state as canonical OO. To me that's not the big idea, the big idea is Objects vs Modules + Structs.
I'm confused that you seem to think that "FP" means immutability. I think that most lisps and schemes, for example, have had mutable state. ML does. I think APL does? I think Erlang does, even though it says it doesn't? It's just a few Johnny-come-lately languages that do this immutability-is-the-default thing. Heck, even Clojure doesn't commit to it. Although I agree that lots of them discourage the use of mutability (including ML, APL, and Erlang).
To my way of thinking, OO means "big bundles of state, with really aggressive encapsulation". And FP means first-class functions, and more pipelining/chaining with less encapsulation. Smart data, vs functions-plus-dumb-data.
And so I think that OO isn't technically incompatible with immutability, but it fits better with FP. And OO isn't technically incompatible with purity, but it fits a LOT better with FP.
Functional Programming as I've always heard the term defined is the style of programming that avoids mutable state in favor of computation over immutable data structures. From Wikipedia:
> In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. https://en.wikipedia.org/wiki/Functional_programming
See also "total functional programming", which takes the concept a step further, and only permits programs that provably terminate (limiting the kind of state that can be manipulated even locally within functions).
My two pence: FP contrasts with the procedural style of programming, which focuses on mutating state, working with functions that have side-effects.
For example, in a procedural program it would be perfectly normal to call an operation on a file object, passing it a buffer object as input, while expecting it to fill the buffer with the file contents, and while returning no value and throwing an exception on error. In functional program this would be discouraged, in favor of something like an operation on the file that reads the file contents, returning a buffer as output.
I would consider both of these styles properly orthogonal to object oriented style, which is a style which permits there to be many different instances of a particular interface with different behavior. OOP focuses on identifying abstractions and implementing code in terms of abstractions. For example, both the file object and the buffer might provide the abstraction of being sequences of bytes. OOP allows a form of generic programming where I can write an algorithm that operates on all instances of the sequence-of-bytes interface without concern to which particular implementation the code is interacting with at the time. Code can be written in both an OOP and FP style if object methods avoid modifying object state.
In practice, however, a lot of object oriented code chooses to assign objects local state that is manipulated via side effects from method calls. For example, a method on a string that modifies the string by appending to it is a method that follows procedural style. By comparison, a method that concatenates the original string with the input string and returns a new string is a method following FP style. Both approaches have tradeoffs. Object orientation means that there can be multiple different concrete string implementations complying with the string interface, that can be passed interchangeably to code written against the interface. The Unix file descriptor pattern is a form of object orientation, since file descriptors may refer to several different resource types supporting similar methods via syscall. Object systems tend to allow the creation of new implementations later without mandatory coordination with existing code using that object interface.
I've seen the WP page. If that's what FP is, then I think that means that either
* you think Lisps and Schemes aren't true FP, or
* you think that Lisps and Schemes avoid mutable state
Which one of those is your belief? I'm not a serious user of any allegedly-FP language, so maybe I misunderstand the history of FP. Note that I concede that Lisp seems to be less obsessed with mutating state than most procedural languages, but I think saying that it "avoids" it is an overreach.
Of course, some people could claim that "old" FP languages like Lisp and Scheme aren't truly FP.
As for your definition of OO, I think the word that describes abstraction across various instance-types is "polymorphism". That's when code can be written that handles multiple types, either because the typing system allows it, or because the typing system is so flexible that it fails to forbid it. Functional code is usually highly polymorphic, whether it's an aggressively static type system (Haskell with type classes) or an aggressively late-binding dynamic type system (typical Lisp, before anyone bothered to invent the phrase "duck typing").
Again, the central concept of OO, at least according to Alan Kay (the inventor/definer of the term), who said in 2003 (later than other definitions of his I could find) "OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things." I think it's pretty clear that statefulness is pretty essential to that definition.
Popular is not always (or even often???) the best way to go ;-) Being on the fringe usually means you've thought about it more than most people.
I'm an OO guy who has been moving more and more in the FP direction. I look at it a bit differently than you, but basically have the same conclusion (I think).
Rather than saying OO and FP are orthogonal, I tend to feel that OO and FP are essentially the same, but that popular OOP misses the point completely.
If I have a struct and a set of functions, where the first argument of the function is always the struct, how is that different than having an object with a set of methods? It might (or might not) be implemented under the hood differently, but whether I write the object on the left, or as the first parameter, it's all the same.
With respect to mutability, most FP languages don't allow mutable data structures. I can likewise restrict myself to immutable data structures in OO languages. Most people don't. Unless you have a specific performance issue in mind, I think this is usually a mistake. I think the fact that mutable data structures are popular is not equivalent to saying that OO encourages mutable data structures. It's equivalently bad in both camps (which I suppose is the meaning of "orthogonal" ;-) ). Immutability has memory management considerations which I will discuss in a minute.
WRT polymorphism, with OO languages, ad-hoc polymorphism (operator overloading) is essential, while parametric polymorphism (generics) is optional. With FP, it tends to be exactly the opposite. But if you consider that both ad-hoc polymorphism and parametric polymorphism are useful tools in both approaches, I think it's kind of a moot point.
The rest of it boils down to coupling and cohesion. In OO, you have encapsulation which forces you to reduce your coupling in certain ways. You tend to have cohesive code in that all functions dealing with a particular data structure are grouped with that data structure. In FP, you have cohesive code in that all your functions doing a particular operation are grouped together. Coupling is not strictly enforced, but writers of good FP code enforce it themselves. But it's not like you can't do both styles in both systems. It's just that the language implementations make one style easier than the other.
The only place that I see which is very different is in garbage collection. If you are using immutable data structures, it is hard to know when to deallocate the memory. So you need some kind of garbage collection (even if it is just reference counting). Having it built in to the language makes it very convenient, but it is hardly a show stopper if you have to write your own (or use a library).
With C++ there is an idiom (which I still think is a good one) where you try to only allocate memory on the stack. Everything is a local variable and you send those local variables to functions that fill in the data for you. That way you can explicitly understand the lifetime of every data structure in the system. I've done a lot of embedded code where I needed to keep track of every byte of code and to know exactly where it is allocated and deallocated. Techniques like this help, but it is obviously not immutable. I don't know how to do this kind of thing with traditional FP languages.
But really, it's only the very last thing that strikes me as being particularly different. The rest is syntax and how much typing I need to do. It's more that the language designers have idioms that they prefer and make those idioms easier. FP and OO seem to share the same pool of idioms, though.
Yeah, I bought a copy, but never really used it. Too much work on unix systems for me by that time. TP for Win and (DOS) TP before that were pretty good, though.
C++ is hard. If one sticks to a strict subset it is a nice language. The problem is the code one didn't write. Even the STL is ugly to use. They are trying to fix the language by adding features but it's like Javascript, the old ugly ones are still part of the spec.
The worst part of the JS "enhancements" is trying to make it more like Java, rather than bringing it back to its Lispy + Smalltalky roots.
I guess the assumption nowadays by The Management is that programmers are too stupid to learn anything new delimited by other than curly braces, or learn how to backtrace data flow between functions, rather than debugging by property-setter breakpoint.
Can you give me an example of a "JS enhancement" you are talking about? It's not that I don't believe you; I'm just curious. I haven't changed the way I, personally, write Javascript since jQuery came out so I'm not up with the new stuff.
Yep. TypeScript !== JavaScript, but it's along a similar misdirection.
Then there's "let". I guess it's not hurting anything, but how useful an addition is it, really? How many nested, non function, blocks you got in your code, buddy, that you need a new scope for your nested for-loops and else-ifs?!?
Between things like forEach, map, filter, etc, I don't actually write very many manual loops anymore, and often use an IIFE for something that outgrows a ternary. Most of my "blocks" are functions.
The problem is that even if all team members across the company agree on the subset, it might change its meaning depending on which language version (ANSI) and compiler vendor/version is being used.
please, if you had 10 years of experience in the language, read every article on C++11/14/17, followed closely the evolution of the language, the answer would have been obvious...
... ok, even I can't say that with a straight face.
Back in the 90's I was working on an operating system kernel that was written in C++. I had read up on the language and had done a lot of compiler work, so I was kind of a local tutor. I'll never forget my boss' reaction when he overheard me tell a coworker "look, if you want to expose your private parts here, you'll need to make a friend."
I would not go so far to call it atrocious, but it has demonstrated how a collection of rules, each (for the most part) well thought out, can lead to a combinatorial explosion of complexity, together with some highly unintuitive corner cases. The strong desire for backwards compatibility all the way to C, together with some expedient principle-breaking, has made things more difficult. All of this has been thoroughly investigated and well-documented by Mr Meyers, and I would like to thank him for all his books and articles on the language.
This issue is what stopped me contributing to the C++ standard. We knew brace initialisation was broken before C++11 came out, even std::vector was a mess.
Yes, either feature in isolation would have been an improvement to C++, but their combination was a mess. Probably one of the features shouldn't have been added, or their combination needed some serious tidying.
C++ FAQs are full of massively complex conundrums like this for problems you just don't get in other languages. I used to love learning all these highly specific C++ rules and playing language lawyer when I didn't know any better and thought it was a good use of learning time to become an expert. I've since moved on to languages that aren't full of problems like this and that let you just get on with what you're meant to be making. If Scott Meyers can't figure this one out, what hope does anyone else have?
> I've since moved on to languages that weren't full of problems like this
I think you mean aren't full of those problems yet. :)
There's a lot of nasty corners in C++ but almost all of them are because it effectively has 40 years of wild success. (I say 40 because C++ inherits C's baggage and success.)
It's totally reasonable to pick a younger language with fewer warts. Become an expert in it! Be productive! Ship lots of apps.
Then, one day, decades and several versions hence, you may find yourself now dealing with all of the warts that language has accrued over time. At this point, you're too invested in it to jump ship. The marginal cost of learning a new corner case is smaller than the cost to start over from scratch and learn 2046's new hot language.
Alas, expertise is hard. Everything you learn makes you better at X but slightly disincentivizes you to learn Y, which may not even exist yet.
Java is over two decades old and one of the most widely used languages in the world. It has absolutely stood the test of time. While it does have its pain points, it's nowhere near as terrible as C++ and (in my opinion) is improving with new versions, and honestly, I think it's a joy to program in, though that might not be a "cool" thing to say. Whereas C++ is a chore to program in.
That's my opinion, maybe I'm just too used to high level programming to be "good" in C++ but my experience is C++ breaks the "principle of least astonishment" every single time I use it. For example, a simple operation like clearing a stringstream is... not straightforward.
Well, there's not a lot of room for problems to crop up when all one can do is define classes :)
Still, here's a few for you: the design of Cloneable is considered to be a mistake, as are exception specifications. These have real negative consequences. File IO was laughably bad and completely unintuitive needing several classes for the simplest of tasks (perhaps this was fixed more recently, but funny you should mention streams).
Resource management was quite tricky to get right before try-with-resources and it still is if one wants performance. Memory leaks and resource leaks are still possible. Boxing and unboxing. The particularly weak type-erased generics.
Java's saving grace is that it's a very programmer-friendly language. No matter how much one screws up, the worst that's probably gonna happen is a null pointer exception with a nicely formatted stack trace attached. The tooling is top class. All of this means that anybody with a pulse can program in Java and get a mostly working program.
It won't be a joy to use, unless one is particularly impressed by classes, methods, interfaces in abundance, class hierarchies and factories.
C++ is not like that. If one doesn't use the safer modern idioms and makes a mistake, it will crash (if lucky), or act in weird ways, or corrupt something. Sometimes there is no way to avoid dangerous code.
There is no nice stack trace, you have to work for it. Probably have to stare at assembly and work with a fully featured but thoroughly painful to use command line debugger that's twice as old as Java.
But it's powerful and really fast. It lets one design creatively, because it supports more abstraction possibilities. And it's not owned by any corporation, it's an international standard that runs on most platforms.
I understand why people use Java. I use C++ because it's not just a tool that gets the job done, but also a way to challenge myself to make something that requires attention and skill. If I want to, it's me and the machine without framewerks, factories, fancy IDEs, dependency injection, package managers and virtual machines.
Like I said, Java has its pain points, but Java's pain points are being drastically improved on with time (java.time anyone?) and were never as bad (in my opinion) as C++'s. My main point is that Java has stood the test of time without becoming as bad as C++.
I'm not impressed by "classes, methods, interfaces in abundance, class hierarchies and factories." I find Java a joy to program in because I can just Get Shit Done™ and things do what I expect them to do.
In my old age I just want to get shit done, challenging myself is no longer very appealing. That's a personal preference though. There was a time that wasn't the case.
Obviously managed code is not for every application but it's good enough for most. I am eternally grateful to C++ programmers for the applications where C++ (or other lower level languages) is needed because I wouldn't want to be doing that. I think C++ programmers, as a group, are so used to C++ idiosyncrasies that they don't see them as idiosyncrasies. That goes for any group though.
For what it's worth, my first language was C++ but C++ always "astonished" me.
(I've never had a problem with boxing and unboxing. Java has had autoboxing for a really long time so those days of manual boxing/unboxing are ancient history)
> Java is over two decades old and one of the most widely used languages in the world.
Half the age of C++ (and its ancestor C, with which it is largely compatible).
> It has absolutely stood the test of time.
Sure, it's held up really well. It takes advantage of the fact that baking in GC, memory safety, and platform independence sweeps a lot of C++'s nastiness under the rug. Of course, those choices are also what generally makes Java unsuitable for many of the things you can do in C++.
Even so, Java has certainly acquired its share of cruft. Primitives, type erasure, checked exceptions, Uri.equals() hitting the network (!), nested classes, covariant arrays, arrays and lists being different and arrays getting all the nice syntax, etc.
Agreed but in my opinion the cruft in Java is not nearly as bad as the cruft in C++. That's a personal opinion though, I'm mostly a Java programmer in my day job so that may cloud my perception of the language because I'm so used to it. I've never fought with Java like I have with C++.
It's URL::equals that hits the network, not URI::equals; the latter is a lot more sane. And even URL::equals often won't hit the network, because Java caches DNS lookups forever.
I realise that I am not exactly disproving your point about cruft here.
Checked exceptions are definitely cruft. Java also has the advantage of trying to be fewer things than C++. It's not a systems language and it's hardly even a desktop applications language anymore. Java has the same niches as Ruby and Python more than C++.
Because in practice they lead to exceptions not being handled properly with a lot of boilerplate try catches. If you can't do anything sensible with an exception (like retry a couple of times) then they shouldn't be caught. The other alternative is chaining throws on function definitions up the call stack.
Most new libraries don't use checked exceptions. They were a good idea in theory but not in practice.
What do you think of languages like Rust, Go, or Haskell which return errors as part of the return type? That's somewhat akin to having all exceptions be checked.
This is the exact opposite of checked exceptions: it returns an error you don't have to handle at all.
It turns me off Go and Haskell altogether because they are languages suited to applications. I think it's going to lead to a lot of apps in the wild ignoring errors.
Rust is a bit different because it's a systems language, it makes sense there.
I don't think it's the exact opposite at all. It forces you to "catch" the exception, but doesn't force you to do anything with it, the same as a checked exception.
Like in Go I can do:
f, err := os.Open("foo.bar")
I am then free to ignore err or do something with it, but the type system forces me to at least acknowledge that it exists. It's the "compiler forces you to acknowledge the possibility of an error" that makes them pretty similar to checked exceptions in my opinion.
Yeah, it's similar in that sense, but checked exceptions force you to acknowledge every type of exception that can be thrown, or one generic one. If you can't handle it, they force you to list every type of exception that can be thrown all the way up the chain.
Eh, from what I've seen, a lot of what people now consider to be the "warts" of C++ were there when I learned it in the '90s. Not all, obviously — but a lot of them, like the STL and all the relics of C, are quite old.
It seems similar to JavaScript. There were some initial good ideas and a bunch of early bad ideas and we're stuck layering improvements over that very uneven base.
> Then, one day, decades and several versions hence, you may find yourself now dealing with all of the warts that language has accrued over time.
What languages are getting even close to being as bad as C++ in this respect though? Java, Python and JavaScript have been around for a while now for example and they seem more opinionated in a way that stops too many warts from appearing.
C++ has undefined and compiler dependent behaviour in the core of its design which I think is the real problem. Its complexity just multiplies at an exponential rate as more features and undefined behaviour is introduced.
I would argue that there are rabbit holes in Haskell that can lead to baroque terrors of syntax on par with the C++ horrors in the article. The difference between Haskell and C++ is that Haskell code always has a well defined meaning even if it is _insanely_ terse. There is no ambiguity like with C++.
Undefined behavior is not specified, by definition! It is the absence of requirements, including the requirement to diagnose.
C++ has something called "unspecified behavior" also, which is stronger than undefined, because it gives requirements. It denotes some circumstances when an implementation can behave in one of a finite possible ways (none of which fail) and must choose one of them without documenting which choice is made. For instance the expression f(a(), b(), c()) can call the functions a, b and c in any of the six possible permutations of the order: no particular order is specified. (This arises due to the order of evaluation of function arguments being unspecified, together with the sequenced evaluation of function calls.)
The specifications of C and C++ contain a large number of places in which they carefully state that no requirements apply to some situation. This is not a good thing; it's basically a myriad of holes punched in the requirement spec, any one of which could be a pitfall.
Few languages are so chock full of holes in their requirements.
But it is precisely specified how to invoke undefined behavior. For example, when two numbers are added and they overflow the size of the type: it is carefully specified when this is or isn't undefined behavior. The C++ standard goes into excruciating detail, but it has not covered everything.
Compare that to many of the newer scripting languages. Elsewhere on this page I described how to recover from a segfault by setting a string conversion function, which I don't think was specified anywhere. You can break some newer languages by overriding an innocuous function like #hash on some objects. These languages have at least as many pitfalls as C++; they are just less well described.
"unspecified behavior" in C++ is a formal term denoting a situation in which requirements are in fact given. There are multiple possible requirements, and the implementation chooses which apply, without having to document the choice. So in fact something is specified, just not entirely.
"undefined behavior" refers to situations in which no requirements apply at all. (Truly unspecified in the regular sense of the word.)
Any examples? My experience is you will frequently be bitten by undefined behaviour in C++ if you don't know the rules well but this is very rare in other languages. It's rare you're going to get a segfault or the whole program crashing in languages like Python, Java and JavaScript.
Those languages all have heavy runtime environments; if a segfault occurs it's an interpreter bug. Whether the mistakes you can make are a result of "undefined behavior" is a different question from whether the consequences of a mistake are an abrupt termination, or an abrupt termination with a helpful error message.
In the CRuby/MRI interpreter, when a segfault happens it calls the #to_s method on the current execution context, to get some memory address and technical details. Ruby happens to be flexible enough that you can override that method. Then you can call whatever you want and you have effectively "recovered" from a segfault. The interpreter is totally in an inconsistent state and could crash at any moment for any reason, but it works often enough that people have done it live in front of audiences for talks.
As far as I am aware, it is not officially specified anywhere that overriding #to_s allows recovering from segfaults; it is intended for converting things to strings.
For more on what I mean start reading the Ruby standard, which was about 1,500 pages last I checked. Then check the C++ standard which is about 3,000 for C++11 and then another 2,500 for the standard library. C++ with its standard library is much smaller than Ruby (in terms of the functionality provided) but has at least 3 times the specificity.
You can do the same thing in C++, either by intercepting signals on Linux, or setting up a structured exception handler on Windows. It's useful when you either A) want to attempt a recovery or B) have a complex tear down.
But in C++ you are specifying a signal handler, not overloading a string conversion function.
Imagine your surprise when you discover via dmesg that your app generates hundreds of segfaults per second, but a string conversion function hid this bug.
So I just went and looked this up: the C++14 standard, or at least the November draft, is 1368 pages and the Ruby ISO standard is 313. I was off by a fair amount.
But this makes my point more extreme. C++ has a much smaller feature list than Ruby, yet the spec is more than 4x the size; C++ is much more specific and precise. Ruby and many other languages will use the behavior of some de-facto implementation as their standard.
Is C++ really better specified? Some languages use "the implementation is the spec" and that's bad, sure (though I'd argue it's less bad than "programs that worked on previous versions of the compiler will contain memory safety vulnerabilities on newer versions of the compiler", which is the end result of C++'s approach to undefined behaviour). But compare to e.g. Java, which has a (shorter!) spec with multiple implementations, including a well-specified memory model in the presence of threads (which weren't even specified at all by C++ until recently), and more importantly actually defines behaviour rather than specifying very carefully the wide range of conditions in which the compiler is permitted to arbitrarily break your code.
Memory is specifically hard one for C++ because it tries not to put a layer between the coder and the underlying hardware. Much of the memory model must be implementation defined.
It wasn't that long ago that we had different pointer for small fast memory and large slow memory, back when more that 64k was a real big deal. Kind of like how now video memory is a real big deal. Now we have a bunch of weird C APIs for putting things there (I presume these can be accessed from JNI or something similar), but these are temporary ugly hacks.
When video RAM becomes a more common thing to refer to specifically, what will that look like in Java? C++ is ready for that kind of weird memory, because it has done exactly this a few times in the past. I am also looking forward to C++17 and its newer way to run std algorithms on heterogeneous executors (like GPUs).
There is nothing to "figure out". You are putting up a false narrative that this is at all a big problem in C++. You'll have to do more work than that if you want anyone to seriously believe such an outlandish claim. The only reason such syntax bugs become popular is that C++ has a sizable chunk of complexity-loving developers who are attracted to such things. The vast majority of C++ developers have never even thought about such complexities because they don't need to. In much the same way, nobody really sits and thinks about how weird it is that 1 == '1' is true in JS, or how you can assign an undefined variable to itself in Ruby, etc.
Obviously this is very hard to quantify, but for me the cognitive load of being a good C++ developer is massively higher than for languages like Java, Python or JavaScript. C++ to me clearly has significantly more undefined behaviour, compiler-specific behaviour and other quirks, along with fewer safety nets (e.g. garbage collection, null pointer checks).
Do you really find JavaScript, Ruby and Java harder to master than C++?
The blog post was about a possible bug in the syntax/standard itself. I pointed out that all languages have weird edge cases and buggy syntax.
If you want to discuss programming languages in general then that is a totally separate discussion. C++ is not in the same class of languages as JS or Ruby, so why would you use them interchangeably? C++ is meant for low level system programming, when performance or size are constraints on the problem you're trying to solve. The problem domain itself entails a certain amount of care.
Also, JS and Ruby are indeed just as hard to master as C++. Heck even Bash is hard to master.
Case in point.
I agree with most of your post. The domains of C++ and JS are far apart, so comparing them is hardly justified, IMHO. However, if we were to compare them:
> Also, JS and Ruby are indeed just as hard to master as C++.
I have to call Bull on this part, with apologies for the tone (but I don't know any better way to put it). How on earth can you claim JS is harder than C++? C++ is doubtlessly a much more complicated language than JS, and for good reasons (albeit a good portion of those are historical). However sub-optimal JS is (it was designed with hardly any gestation time and on a ridiculously tight schedule), it can be taught almost completely in a matter of hours (sans the DOM). Add a couple more hours for teaching browser security models and pointers on where to find docs for everything JS.
>How on earth can you claim JS is harder than C++?
Because I didn't? Please re-read. I said any language is just as hard to master since the earlier context was obscure dark corners of C++ syntax (Which was the general topic of the article. The vast majority of C++ programmers would never have run into that syntax bug). JS has obscure syntax and my argument is that you need to expend equal effort to master it.
My comment started with "C++ FAQs are full of massively complex conundrums like this for problems you just don't get in other languages" so I was making a general comment prompted by the article.
> C++ is meant for low level system programming, when performance or size are constraints on the problem you're trying to solve. The problem domain itself entails a certain amount of care.
Agree with this, but my view is you should avoid C++ where practical because of its productivity and bug-risk impacts compared to other solutions. If low level is required, though, your options are limited.
> Also, JS and Ruby are indeed just as hard to master as C++.
We'll have to agree to disagree then because C++ is an order of magnitude more complex in my eyes. Code golf isn't a good metric to me as that's all about using obscure tricks to write as few characters as possible which isn't how you code normally. I would rather measure how easy it is to write code with minimal bugs and C++ gives you a million ways to shoot yourself in the foot.
IMO, Code golf is an excellent metric. Only someone who masters a language and knows its weird edge cases and obscure syntax can make sense of the answers, or attempt to present an answer themselves. If writing code with minimal bugs is your goal, then you don't need to master the language at all. So, I don't even know what we're arguing about. Looks like we agree on most things. So lemme ask you, can you name books (or courses) that teach C++, and devote a significant chunk (or any) of the material to cover such arcane syntax topics? Yes, C++ is complex, because the underlying hardware is complex - and you want to get as close to the hardware as possible. After all, you're basically signing up to repair and service the car yourself, versus bringing it in at the dealership.
> IMO, Code golf is an excellent metric. Only someone who masters a language and knows its weird edge cases and obscure syntax can make sense of the answers, or attempt to present an answer themselves.
To me, mastering a language is knowing how to use it in a way to solve a problem in the fastest development time with minimal bugs and maintainable code which does not match up with code golf. C++ gives you lots of ways to introduce weird bugs which isn't something code golf demonstrates.
> So lemme ask you, can you name books (or courses) that teach C++, and devote a significant chunk (or any) of the material to cover such arcane syntax topics?
The C++ Programming Language book and Effective Modern C++ are good and they're full of "don't do this or something bad will happen!" parts. I'm not really concerned about a few arcane syntax problems but I am concerned that C++ has a lot of undefined behaviour that lets you shoot yourself in the foot in a way that other languages avoid.
I really find Python harder to master than C++: in C++ you sometimes have to grapple with syntax and development tools, but after it's resolved, your program works predictably, just don't forget to enable debug symbols and core dumps in release builds. Resources are allocated and freed deterministically, the memory requirements are often obvious from code down to some additive, not multiplicative, constant.
Granted, I mostly work on embedded image processing systems, but would use C++ in any context where long-running processes and/or low latencies are required.
On a more subjective note, I'm waiting for .NET Core to mature a bit, since C# seems to have a balanced selection of features from managed and native worlds, plus a very pleasant syntax and tool support.
Sorry, I don't quite understand what your point is. You will have to be clearer than posting a link as a reply to an obvious use of a rhetorical device. But if that was the extent of the reply then you will have to excuse me from any pedantic argument.
I can see why he is taking time off C++ -- I have read his Effective Modern C++ and it did not encourage me to want to understand the depth of C++ anymore. Ruined my desire to be an expert -- does not lead to sanity.
“The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.”
― Marcus Aurelius, Meditations
Haha, same here. I learned C++ as my first real programming language, and I used to like the role of language lawyer at the forums. Still like the role, but now I do appreciate working with simpler (grammar-wise, not feature-wise) languages.
I've been programming professionally for over 20 years, most of it in C++. I really like developing fast code, and C++ used to be my favorite tool for the job.
But the language complexity gets more insane with every revision. It's now at the point where Scott Freakin' Meyers can't figure out the interpretation of a short little chunk of code.
I'm now actively looking for gigs in which I can develop high-performance code in a language _other than_ C++, just because it's such a minefield.
I don't follow the discussions of the standards committee carefully, but it seems like they must place almost no premium on keeping the language (and its programs) understandable by non-language-lawyers.
> I'm now actively looking for gigs in which I can develop high-performance code in a language _other than_ C++, just because it's such a minefield.
C++ was my next loved language after Turbo Pascal and I still like it.
Nowadays I use mostly JVM and .NET languages and after reading Meyers's latest book, and still following the standardization process, I have to agree with you.
On private projects I can control what gets used, but at company wide level, it is almost impossible to do so.
So one ends up with a gigantic C++ code base, many times just C with a C++ compiler, with different meanings depending on which ANSI version, compiler extensions and compiler version are being used.
Considering most common companies don't follow code reviews or use static analysis tools, this just leads to quite some buggy code.
You'll go from bad to worse; Scala is to Java what C++ is to C; barely anyone understands all constructs in either language. You either know everything or nothing (i.e. you can't understand somebody else's code unless you enforce rules or know the complete language in detail).
I think this comment is fair, but could be misinterpreted.
Scala allows you to do the same thing in many different ways, and with operator overloading ... the approach you chose to coding defines almost a 'custom syntax' for a project.
Which makes it very hard to read sometimes - you have to infer the 'style' before you can get too far.
To be fair - that's a different kind of 'inconvenience' than C++ from C - but the comparison is pragmatically valid.
C++ I think is half academic, half engineering, with a lot of voices and a lot of anachronistic 'grey beards' arguing over stuff. Even for Java, James Gosling said 'Sun is less a company than it is a debating society'.
Scala is almost purely academic - but driven by a single person / small team. Scala was not designed to solve an Engineering problem, thus it seems a little 'intellectually masturbatory' (pardon the term).
Scala is way too hard to learn - partly because it's a 'new paradigm' but partly for the reasons aforementioned. So it's only going to make its way into production for specific entities.
Scala may have a 'big future' if tutorials, training and approaches settle down on commonly accepted standards and practices.
I don't doubt at all that Scala is 'industrial grade tech' and is theoretically capable of supporting even banks.
But it's not going to go mainstream because the alternatives are far more accessible.
The examples you mentioned are relatively small. You know what's big? Walmart. WellsFargo. Honeywell. Novartis.
I don't think 'it will be a while' before Scala gets into those shops, I think it will be 'never' unless there is a change to how Scala is presented. As of 2016, it's still a little academic and cliquish.
Though I predict some things will change and it might gain popularity.
Yeah - that's a lot of links for one point - Walmart.ca.
Which is cool, and doesn't surprise me.
But I don't see the trend lines of Scala heading in the direction they need to be for it to become an incumbent language.
I don't see it as having enough disruptive power to break into the Javascript/C/C++/Java or even C# club.
My own experience with Scala is that there is just way, way too much 'change' for the added value.
I wish they could make some tweaks to Java to incorporate some elements of Functional programming and I would be happy.
And before you blow up, it's possible to mix both styles, obviously Scala itself is an example of that.
A specific example: a lot of FP comes along with how collections, lists, maps etc. are managed and used. Lodash/underscore provides a lot of that for JS and makes a lot of JS 'functional-like'. If Java had such libraries with the mechanisms to use them (first-class functions - lambdas are not quite that), then Java would be a lot better. Anyhow, to get the 'good things' about FP I don't think you need an entirely new linguistic paradigm.
When I was a teenager, it was fun to do that in C++ until I realized everybody thought it was fun but their version of fun wasn't compatible with my version of fun.
That's irrelevant. Scala simply allows your coworker to go "off the rails", employing their creativity even to language constructs, and you either spend time to learn their way of thinking, set firm rules about what is allowed, or you are simply out of luck, like often with C++.
Even though C++ has been the way I've earned money for the past 20+ years, I've programmed quite a lot of Scala in my spare time. I find the language beautiful in its compact, expressive syntax; I also like the standard lib and the vibrant community. Being able to mix OO and functional programming is also something I enjoy very much.
So being able to use this language as a systems language is something I very much look forward to.
Sure, I use and like both of them: C++ for 3D graphics, Scala for Big Data. Yet I have seen so many things go wrong in both that my perspective is a bit different. Imagine if somebody redefines basic operators to mean something different, starts throwing ints as exceptions in C++, or writes templates taking template template parameters and numeric parameters, and you have to figure out what exactly this thing does. Or in Scala, if somebody feels super clever and redefines half of the language in some half-baked, barely functioning DSL that can't compile code even slightly different from what was expected, and you have to use it? It's the price you pay in both C++ and Scala for expressibility.
It's not about dumb code. Actually both C++ and Scala can allow you unbelievably clever code which is also super difficult to understand by somebody uninvolved. Remember how some Haskell guys were complaining they aren't able to understand the code they wrote at the top of their game anymore?
Not my experience at all. There are a lot of dense one-liners in Scala but they tend to be the result of combining several orthogonal features in one place; the set of fundamental constructs is quite small and you can learn them pretty quickly, at which point as long as you're willing to keep calm, take your time, and read the one-liner like it's the five or ten lines it would be in other languages, you'll be fine.
I'm a game developer using C++ every day at work. It's the standard in our industry (at least for AAA games) and probably will be for at least another decade.
I used to choose C for pet projects (when I cared about performance) but I've been learning Rust for ~3 months now. I think I'll replace my usage of C by Rust from here on as much as I can.
I also had a brief affair writing Go but it didn't resonate nearly as well with my background as Rust does.
And yet, I'm laughing. As you know, that was what we called the C preprocessor thing back in the mid 80s (as discussed in Uni - I never used it until about 1990)
I went to Java/.NET lands and respective languages.
Since 2006, I only had one project where using C++ really mattered. A mix of Assembly and C++ was used for image codecs, which were just 5% of the whole application done in WPF.
On personal projects, I went back to it for portable code across Android and Windows Store, but now with Xamarin being part of Microsoft, I am using it for newer projects.
Java/C#, though the complexity there is now matching C++. Especially with all the new fashionable functional "features", most large code bases are a huge mishmash of styles and complexity that is making me wish for sensible C++ developers again.
The problem with Java isn't so much the language. It's the ecosystem and the community. The ecosystem is very rich, so there are many ways of doing pretty much anything, which means it's easy to get lost in the complexity. And the community is fad-prone. Patterns! Injection! Annotations! Something else (for now)!
Haskell has syntax that is as problematic as this one, and a huge number of extension combinations that nobody can be sure not to be problematic.
I'm running into it as much as I can too, but it does look like a language we should replace with something simpler once we get a specification that works.
The C++ usage (T x{{}}) is problematic because its interpretation is so obscure as to flirt with ambiguity and lead to program-breaking changes in compilers as their semantic analyzers improved. The Haskell line you cite isn't ambiguous at all.
About obscurity, how many GHC extensions were required to make it accept this code? (Is GADT enough?) And by the way, this is in the interface, while the C++ example is in the implementation.
Besides, that's Haskell code that happens in practice. This is one I've just looked at, if I did some research I would certainly come up with more egregious code (maybe even mine). This one probably isn't even problematic (it's hard to say from the interface), but in general, mixing those extensions does lead to some very interesting problems from time to time.
Haskell has the same excessive complexity problem that C++ suffers. I do think this is the right thing to do right now. But I can't imagine it not looking as dated in 4 decades as C++ is now.
Honestly for me… C++. C++ but with restricted and consistent feature usage at every level of the stack. This means no stdlib, no stl, no boost. Writing your own pared-down standard library is actually not that hard if you make sure none of the calling code hits edge cases. 90% of boost just makes sure it's robust in the face of anything (classes that are movable but not assignable, initializer list constructors that behave differently, classes that overload unary &, etc… just weird stuff). Honestly the only reason I do this is simply because I like writing fast code and C++ is a good tool to do that. It would take me longer to become equally proficient in Rust, D, etc, than to just write 100% dependency-free C++. I mostly work on programming language implementation though, so libraries are less of an issue than in other possible situations.
D has been catching my attention a lot though. I just wish it had something kind of like C++'s move semantics, to make implementing owned pointers (C++ unique_ptr, Rust Box, etc) possible. Maybe it does and I just haven't seen it used yet. The other problem I have with D is that most libraries seem to rely on the GC.
Re: move semantics and unique_ptr in D, take a quick peek at std.typecons (unique, refCounted) and std.algorithm.mutation.move. I haven't used them, and they are library code (and so not as well specified as C++'s counterparts), but you might find them useful. There's been good progress on the @nogc front in the standard library, too, hoping to see continuing improvements there.
My personal worry with D is that the language changes a bit faster than, say, C++ or Rust. I don't know whether the code I write today will still compile in five years. I wouldn't mind a feature freeze on the language design and the existing stdlib APIs.
The main D people (Walter and Andrei) and/or the forums say that there is work in progress to remove GC dependency from some of the libraries as an option. Not sure of the progress or rate of it though.
Edit: Sorry, should say "main D architects" - there are many other people involved in developing D.
The STL is generally nice, I agree, and I really miss the separation of algorithms and containers in every other language.
Generally, however, I'm not really a fan of the 'generic programming' paradigm the STL (and Boost) are based around. In C++ it quickly leads to an explosion of complexity.
And as for the best part of C++, RAII and sparing use of templates. ;-)
I would like to thank Mr Meyers for his Effective C++ book, back in the 90s. Said book for me was the tipping point with C++. Having used Borland Pascal-with-Objects, and read about Eiffel, that book confirmed for me, at least, that C++ was a tool of last resort for trying to build on top of existing C code.
I was skimming his most recent book and I got the impression that a large part of it was on pretty much the same esoteric nonsense of various && constructs as this blog post.
The blog post has an excuse, I'm not sure the book does.
In fairness, it was a good book. But it's like combat training. You don't want to be in a combat zone. BUT, if you are, you had better know what you are doing.
Amen. I was just thinking the other day about how I used to really like C++ but how glad I was to not have to use it anymore. I haven't worried about "the most vexing parse" for years! There was this last week, and yes it's contrived but it suggests something has gone horribly wrong: http://stackoverflow.com/questions/40462612/moving-a-member-... .
It seems to me C/C++ has always had problems. K&R C had weird stuff like char pointers used to point to anything. And modern C++ has odd complexities like what Scott's article illustrates.
Was there ever a Goldilocks moment when C was just right?
C works great, as an alternate to assembler, to bootstrap unix up on meager, 80s style, hardware.
Based on things like the shell tools and languages like awk (and other related descendants), I don't think even the unix creators meant for much application level work to be done in C, though, but by assembling components in higher level languages. Try telling that to The Management and all the macho/masochistic Real Programmers, though. Bounds checking? Memory management? (names you can identify to the left of the types, like in Algol, instead of to the right?) That's for sissies!
To be honest, I seldom used lint, either, when I was starting out (although I suppose I cheated by using the Borland IDE in the early 90s). However, when using gcc in the mid to late 90s, it was just too easy to tack on -Wall (enable all warnings) to the options. No real excuse not to always check things, at that point.
I'm struggling with this too... I'm implementing a shell, and on the one hand I've been looking at ancient C code in bash, dash, zsh, mksh, etc. That's clearly not the right way to do it, with raw strings, pointers, and globals everywhere. It's ridiculously verbose and error prone. All recursive parsers in C seem to use setjmp/longjmp for error handling. They have weird memory management with stack-allocated strings and globals too, because memory management in C is such a pain.
I tend to like "C with classes". When I started working at Google, the C++ style was a breath of fresh air over what I had seen in the game industry (this was in 2005, so C++ was in a very different state). It was very much C with classes: no exceptions, no iostreams, no operator overloading, no default copy constructors or assignment, in/out params, etc. (Lately it has been trending toward "modern C++", albeit with custom internal libraries for errors and such)
Then C++11 came out -- you really can't do without auto and range-based for loops. It looks like strongly typed enums are useful, and some STL stuff. For recursive parsers, exceptions instead of setjmp/longjmp seem useful too (longjmp doesn't run destructors AFAIK). I'm scared of R-value refs and move semantics -- I don't know anything about those.
I guess my point is that it's 2016 and it's hard to know if there was a goldilocks moment. Google was using a restricted dialect of C++03 which I think was pretty great and built a lot of real software, but it's hard to say that C++11 didn't add anything useful.
I'm sure the Google style diverges pretty wildly from Bjarne's new effort of C++ core guidelines, which I need to look into more:
It seems like the LLVM codebase really knows what it's doing in terms of C++, which makes sense, but honestly it's fairly complex for what I want to do. So I think we are just stuck with all these local dialects, which makes code reuse a problem.
I'm not quite sure what you mean by 'goldilocks moment', but I think C++11 was it. Ever since then, the language has had a nice balance of high level abstractions at low level performance. (Theoretically it actually got faster due to move semantics)
The parent was asking the goldilocks question -- was there ever a point where C++ was not too complex like modern C++, e.g. this {} issue, but had enough features (contrast with C, which doesn't have enough features for many applications). I don't think there was such a moment.
Apparently move constructors were a de-optimization in the case of std::vector. As far as I remember, it's related to iterator invalidation and move semantics making small-size optimization impossible (e.g. inlining small vectors into the object itself rather than having another pointer and heap allocation). I'm pretty sure it's in this video:
Although it's debatable how big a deal this is, it's a good example of the point. People like "C with classes" because they can read the source and kind of figure out what code is generated. They can reason about locality.
Although I don't consider myself an expert C++ programmer, if I can't reason about the consequences of move constructors on std::vector, which is a fundamental and relatively simple data structure, then the language is already getting into "surprising" territory. I would have never figured out that de-optimization unless I heard it in a video.
On a related note, I wish that return value optimization had better syntax -- that would make move semantics unnecessary in a lot of cases.
On the other hand, reasoning about the generated code is already a lost cause because assembly code is so complicated now. We now have to live in a world where we can never fully understand our tools.
At what point is it mentioned in the video? AFAIK small optimization on std::vector was already invalid since .swap() needed to keep existing references/iterators valid, couldn't throw or call copy constructors/assignment.
RVO being transparent would be nice, though. There's a proposal somewhere to make it guaranteed in some circumstances, but it's still fairly opaque when reading code
Sorry I don't recall offhand. It was definitely Chandler Carruth who mentioned this, and he has a handful of videos on YouTube from CppCon. They are all worth watching if you care about such things :)
I don't know the details but I'm fairly sure that the issue is that move constructors in C++11 made small size optimization impossible in some cases. I think he works on the LLVM optimizer so that would be a primary source and not hearsay.
On top of assembler being complicated, the CPUs contain another sort of compiler and optimizations, so it gets even harder to know what exactly happens.
I thought that moving is optional in a sense, and the compiler could decide to copy instead of move (?)
C was already bad when it was born, compared with what Burroughs was doing with the B5000 in 1961 (nowadays sold as Unisys ClearPath MCP), or other similar languages of those days, like Concurrent Pascal on Solo OS (1975).
When I learned C in 1993, it seemed quite poor compared to my Turbo Pascal 6.0 type safety and respective language features.
The main advantage of C was that it allowed you to type stuff super fast compared with Pascal ({} instead of Begin..End, etc.) and had some very handy shortcuts for often-used operations like ?:, ++, --, +=, etc. A less verbose language means less need to type syntactic structures. You could basically keep the flow of what was in your mind in step with your keyboard and forget about the typing; you were simply assembling stuff in your mind and, at a rapid pace, making it happen on the computer. You can probably experience something similar now with Go, sometimes Scala.
Typing fast usually doesn't lead to working programs that solve what is actually the customer's problem.
Software engineering isn't about "keeping the flow", rather about achieving the quality, deliverables and desired outcomes required by the users of the software.
Is this a typo for Pascal? When I did the ICPC, Prolog was not a permitted language in my region, and IIRC not at worlds either. But Pascal was (which I always found quaint, but perhaps that was my provincialism).
> The main advantage of C was that it allowed you to type stuff super fast comparing with Pascal ( {} instead of Begin .. End etc.)
Speaking as a developer who works mostly in C and C++, that's a pretty crappy advantage. Based on the answers here[1], it seems like the real benefits were a more permissive type system, better library, separate compilation units, and better ways to directly access hardware.
All the advantages of C over Pascal, usually disregard the extensions that almost all Pascal implementations had.
All the points you mention from Quora were covered by Object Pascal, Turbo Pascal, Quick Pascal, TMT Pascal, Think Pascal, VMS Pascal, HP Pascal,....
Yes there were lots of dialects, but writing C code back in the day each C compiler outside UNIX had their own view of what compiling K&R C was all about, C wasn't any better in portability across compiler vendors and OSes.
If it wasn't for UNIX's adoption, C would have been just yet another systems programming language.
This was my personal experience, where learning C allowed me to make a 3D engine with Gouraud shading in a few days, where the same in Pascal would have taken a lot longer, mainly because of necessary syntactic sugar. C just allowed to go super fast and didn't restrict experimenting as much as Pascal did (which is still a pretty permissive language), and even adding inline assembly code to render fast felt natural, unlike in Turbo Pascal. And this speed carried over to competitions.
You could do some tricks with GCC like not specifying arguments in your MOV instructions and let GCC use the optimal registers it calculates during compiling. Practical consequence was that instead of doing some default MOV AX, [something] then MOV BX, [something], then e.g. MUL BX, you could have avoided the first two operations as GCC would chain them together properly without the need to fill in values; so by inlining such code you'd get very very close to hand-written assembly language without additional overhead, speeding up your 3D engine significantly.
I'd say the two more important advantages were a) the bare-metal view C gave you, down to individual bytes, which was important for systems-level programming, and b) the lack of overhead from things like bounds checking and garbage collection, which was useful on the butt-slow machines in the 70s and 80s.
Turbo Pascal (from early versions) could very much handle individual bytes (and IIRC, had bit manipulation primitives like bitwise and/or/xor/not, shr, shl, etc.), and could even address memory at fixed locations, such as the video memory on the IBM PC (and segment:offset x86 memory at that). Don't know about other Pascal implementations, such as the others pjmlp has mentioned elsewhere in this thread, but would not be surprised if some of them could as well. I agree with pjmlp's statement (again, elsewhere in this thread), that C is probably being compared (to C's advantage) against early standard Pascal (which was a limited, academic language), not the commercial implementations which would have been / were more pragmatic. And I say this as a guy who liked and used both Turbo Pascal and C (C on DOS, Unix and Windows) a lot, for long, earlier, including being team lead for a successful Windows C database middleware product.
T() had different meanings depending on T and the context. It could be value initialization, a cast, or a function declaration(!). Also, when used with arguments, you couldn't use it to uniformly initialize aggregates and non-aggregates, which is important in generic code.
The T{} syntax can be either a constructor call or aggregate initialization and never a cast or function definition.
It's really hard to read this article on my phone because the code sample overflows to the right and left swipes are aggressively interpreted as a navigation gesture.