Today I was at the Istanbul courthouse for the third time this year. I attended a trial defending myself before a judge, then gave testimony to a prosecutor about a different case. Both cases were about the free speech platform I own in Turkey. Meanwhile, out in the world, one of my older Stack Overflow posts hit #1 on Hacker News and #1 on reddit/r/programming. I wish it were Turkey that made me feel better about myself, not the rest of the world.
The poor compilers do the best with what they have. And by "what they have", I mean the things they can't make assumptions about, which turns out to be a crapload. Until compilers can make those assumptions, very highly-tuned assembly will continue to outperform the best of them.
Of course, a more typical scenario is hand-tuning a few loops where 99% of the clock cycles occur and letting the compiler take care of the rest.
(Also, I was attempting to send a Morse code message by flashing the vote counter between up and down, but I don't think anyone got it :( )
I used to work on a compiler for an embedded system. One fun trick we used while writing the run-time libraries was to write them in C, then optimise the generated assembly (possibly by rewriting it from scratch), then fix the compiler so it generated that assembly :). Sometimes that did require using (and occasionally adding) new intrinsics.
I worked in the same company (Hey Andrew!). It would definitely depend on the specific instructions/optimisations involved, but I believe they were generally pretty robust to changes. Some would require the user to supply pragmas (e.g. to specify data alignment), but obviously the compiler intrinsics would always generate the expected instructions.
Most of all, the compiler can't change the layout of your data. Many of the use cases for hand-crafted assembly involve SIMD instructions, which perform several operations in parallel. These vector instructions can be incredibly fast, but their limitations often mean that you need to carefully design the data structures of the whole program around the optimization of one single critical loop.
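To make the layout point concrete, here is a minimal sketch (the `particle` type and field names are made up for the example) of the same data in "array of structs" versus "struct of arrays" form:

```c
#include <stddef.h>

#define N 1024

struct particle { float x, y, z, w; };   /* array of structs (AoS) */

struct particles {                       /* struct of arrays (SoA) */
    float x[N], y[N], z[N], w[N];
};

/* With AoS, consecutive x values are strided 16 bytes apart, so a
   vector load of 4 consecutive floats grabs x,y,z,w of ONE particle
   instead of the x of FOUR particles. With SoA, x[i..i+3] is exactly
   one vector, which is what SIMD loads want. */
float sum_x_soa(const struct particles *p) {
    float s = 0.0f;
    for (size_t i = 0; i < N; i++)   /* contiguous: easy to vectorize */
        s += p->x[i];
    return s;
}
```

The SoA version is the kind of loop an autovectorizer handles well; getting there may mean restructuring every piece of code that touches the data, which is the whole-program cost described above.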
I've written lots of code that either GCC has autovectorized into SIMD instructions, or that I've represented using GCC's vector types which produce SIMD instructions. While it is true that a poor choice of data structure will preclude SIMD optimizations, use of a compiler does not preclude the same.
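For anyone who hasn't seen them, GCC's vector types look roughly like this (a sketch; `__attribute__((vector_size))` is a GCC/Clang extension, not standard C, and `saxpy4` is a made-up example):

```c
/* Four floats in 16 bytes; element-wise operators on this type
   compile to SIMD instructions on targets that have them
   (e.g. addps/mulps on x86 with SSE). */
typedef float v4f __attribute__((vector_size(16)));

v4f saxpy4(float a, v4f x, v4f y) {
    v4f av = {a, a, a, a};   /* broadcast the scalar by hand */
    return av * x + y;       /* 4 lanes at once, no intrinsics needed */
}
```

The nice part is that the code stays portable-ish: the compiler picks the actual instructions per target, falling back to scalar code where no SIMD unit exists.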
At the same time, I have seen autovectorization fail on things that looked like obvious candidates, or generate code that is still sub-optimal. I would prefer autovectorization over hand-written SIMD code every day, but the key here is reliability. If I cannot guarantee that the performance will stay the same across different compiler releases (which is often the case with autovectorization), it is much more convenient to just write the optimization myself and be sure that it happens regardless of the environment.
> the compiler can't change the layout of your data
Sure it can; this is the most promising optimization avenue at the moment. Not just at the micro level, making data layout more SIMD-friendly, but also more cache-friendly object layout and so on.
Modern systems are primarily memory bound, not ALU bound, so squeezing clock cycles out of computations is pretty far into diminishing-returns territory.
C/C++ effectively forbid these kinds of optimizations, but they're The Future (tm) in high-level languages.
There are some magic keywords, like const, which definitely help the compiler. Unfortunately these kinds of optimisations are far too numerous, and soon become tedious. After a certain point, more structured languages could allow much more rigid optimisation. These days JIT seems to have taken over though, which isn't really a bad thing.
Const doesn't actually help compilers (at least in C/C++ based languages). Since it can be const_cast away and still work in a standards compliant way, any optimisation assuming const values never change is a bug. Instead the compiler will use other techniques, e.g. finding variables that are only written to on initialisation, and those don't even need to be marked const.
I've seen gcc turn a const bool& into a direct pass of a bool by value. Of course, I noticed this because it was the implementation of a web service binding, and the wrapper was going to pass me a bool& (as a pointer) whether I liked it or not. Instant segfault...but not always :(
If an object is defined as const, casting away its constness and modifying the object causes the program's behavior to be undefined. Compilers are free to take advantage of that.
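A minimal sketch of why that matters (assuming a conforming compiler; `max_users` and `room_left` are made-up names):

```c
/* An object DEFINED const may be placed in read-only storage, and the
   compiler may fold its value into every use. */
const int max_users = 64;

int room_left(int current) {
    /* The compiler is free to compile this as `64 - current`,
       never loading max_users from memory at all. */
    return max_users - current;
}

/* Undefined behavior -- do NOT do this:
 *     *(int *)&max_users = 128;
 * On many platforms it simply crashes (read-only page); on others
 * room_left() silently keeps using the folded 64. */
```

Note this only applies to objects *defined* const; a const pointer to a non-const object can legally be cast and written through, which is why const parameters buy the optimizer almost nothing.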
That is true, but it's extremely rare to be able to take advantage of that. When people think of const as a potential optimization aid, it's almost always in the context of const pointers, which actually do nothing at all for optimization.
Do you have a spec reference for that? I was under the impression you could safely const_cast and modify pretty much anything. Perhaps it was just primitives like ints, or const member functions.
Thing #1: In C++ the word "const" can mean many things depending on position and inflection: declaring a symbol an actual constant; a pointer argument has in (as opposed to in-out) semantics; a function has no side effects. I keep forgetting which use of const does what.
Thing #2: Each use of const, regardless of meaning, induces a different type. Which makes API design a pain in the ass since you can't pass in stuff that wasn't declared with the same constness if the procedure expects const parameters without casting the parameters. I actually worked on a codebase where a guy broke much of our code by checking in a "fix" for our APIs to make them comply with what he considered best practice regarding constness.
Thing #3: Passed-in pointers and references are not 'const' by default. You should have to declare intent to clobber their referents.
Thing #4: And as someone said downthread, you can cast around it in most cases anyways.
This is why I say: if you're thinking of using C++, reach for Ada instead. Ada's reputation has been unfairly tainted by whiners complaining about its verbosity. But I swear that figuring out a page of Ada code is easier than figuring out a few lines of sufficiently complicated C++. For one thing, in Ada, function/procedure parameters are 'in' (i.e., const) by default, and you must declare your intent to modify a parameter by marking it 'in out'.
Cute. Although I think my compiler has different things to say.
Hello, I'm a C compiler that still can't handle C99. I hope you're wearing waterproof clothing because I'm gonna throw up on you. Also, I've been drinking heavily so your C++ code is going to take a while to compile and when it's done, it's gonna smell funny.
Visual C++ is a C compiler - it compiles C90. The issue is that they still haven't added support for C99. You can compile your C code as C++, but then you're restricted to the common subset of C and C++.
Bundling C and C++ compilers together was a way for C++ vendors to ease the migration into C++ land, especially in the early days. There is no law that says C++ compiler vendors are required to offer a C compiler as well.
Given that at Build 2012 it was mentioned that the Windows team is making their code compilable as C++, and given Microsoft's actual position on C++ vs. C, the C compiler might even be dropped from Visual Studio.
I noticed you couldn't optimize my code to use SIMD so I went ahead and used inline assembly. It will probably take another 30 years before you can actually think like a human and perform optimizations like this.
It takes a really good developer with vast knowledge to do optimizations such as using "inline assembly" for stuff like SIMD. And even though there are enough good developers able to do this, the mother of all problems when developing software is managing complexity.
Yes, you can take a subroutine and apply local optimizations on it. Building complex software in assembly that on the whole is better optimized than what a compiler can do is next to impossible.
Speaking of SIMD and stuff like it, there are already optimizations that LLVM is doing, but such optimizations are hard to apply ahead of time because (1) if you want to distribute those binaries easily, then you need to compile for the common denominator (which is less of an issue with LLVM) and (2) your programming language sucks. It's not the compiler's fault, but rather your own fault that you're using a programming language so confusing that inferring intent from your code is next to impossible. How can the compiler know that you're sorting freaking numbers if you're specifying exactly how bits move around in memory while doing so?
If you're speaking about virtual machines though, there are projects out there for .NET or Scala for instance that can recompile/retarget code at runtime to run with SIMD instructions or on your GPUs if you have any. All you need is a virtual machine that runs bytecode and a programming language (slash compiler) that lets you access at runtime the syntax trees of the routines that you want to optimize and that lets you generate new bytecode. So you can easily shove this kind of optimizations in libraries for special-purpose and descriptive DSLs (e.g. LINQ).
Of course, it gets tricky and doing stuff like this at runtime has overhead, but it's better than what 99.99% of developers can do, not to mention that good developers first and foremost ship.
> It's not the compiler's fault, but rather your own fault that you're using a programming language so confusing that inferring intent from your code is next to impossible.
And that's the long-running argument for why high-level languages have the possibility to be compiled to faster code eventually. We are mostly still waiting for the languages and compilers. (Even though ghc gives us hope.)
Hello, this is the compiler calling back at you. I am very sorry that I cannot write efficient SIMD code but neither can you. If we can work together, we will outperform our individual selves as a team.
So go ahead and write clever SIMD code, but please use the SIMD intrinsics I can understand, not inline assembler, which I cannot do anything with. You are very good at expressing algorithms in a SIMD-friendly way and do a decent job at instruction selection.
You, however, are not very good at instruction scheduling and register allocation, so let me handle those and together we can achieve a result that keeps the CPU pipelines busy. When you make the tiniest change to the program, I can redo instruction scheduling and register allocation in an instant, whereas it would take you hours to rewrite the whole algorithm to use different registers.
My point: SIMD intrinsics plus a smart C compiler produce much better code than a programmer writing assembly. Clang with its vector extensions in particular is very good at it.
This has never quite been my experience. Look-mummy-my-first-SIMD-routine is usually within spitting distance of the compiler's best effort, and a lot neater-looking. Then you can just take it from there.
The compiler's output is far from useless, because compilers generally know a few tricks and/or cute sequences that you can copy. And using intrinsics is certainly better than relying on it to automatically vectorize things itself - I've never heard of that do anything remotely interesting. But, even though it's 2013, it still seems like the compiler is easier to beat by hand than you'd think.
I make no claim that this is worth doing in every case.
(As for the suggestion that the compiler can't rearrange inline assembly language, or provide symbolic register names - whyever not? Perhaps people have been conditioned by gcc and VC++ never to expect anything except the absolute bare minimum from inline assembly language support, but that's no reason to assume that these tasks are impossible. Code Warrior for Wii would rearrange assembly language for you to avoid holdups due to instruction latencies, and while automatically assigning registers for you. I believe most modern assemblers will do similar reordering and at least let you assign names to the registers you're using.)
Reminds me that the best current chess player is actually a team mixing humans and computers. Even in a domain where computers are undoubtedly better than humans, the best solution still involves humans.
I have picked up your cleverly optimized SIMD and decided it was better to change the execution order, because another unit was bored with nothing to do.
Unfortunately the L2 cache seems to have some issues getting all required data for your instructions due to the way all your threads are manipulating the required addresses from multiple cores.
I have to admit this is the painful truth. The speed of the vector operations didn't actually improve using SSE3. But that doesn't dismiss the issue with compilers not being able to actually vectorize properly.
30 years? ICC already does automatic vectorization pretty well. LLVM and GCC have implementations that need more tuning. I bet they'll be solid within 5 years.
You can still beat them with inline assembly, but you probably won't accelerate the code by the full vector width anymore.
> 30 years? ICC already does automatic vectorization pretty well. LLVM and GCC have implementations that need more tuning. I bet they'll be solid within 5 years.
Unfortunately, the automatic vectorization compilers do is not very smart and is targeted at simple scalar loops in existing code bases. In many cases, a decent programmer can easily outperform the compiler with simple SIMD tricks, especially in cases where you can use clever vector shuffling tricks (e.g. sum of 4 numbers with 2 additions + 2 vector shuffles).
To get the best of both worlds, use compiler/machine specific intrinsic functions or vector extensions with a smart optimizing compiler. Clang's vector extensions work really well.
How would you sum 4 numbers with 2 adds + 2 shuffles?
(sorry, I'm rusty in SIMD)
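For reference, the usual trick looks something like this with SSE intrinsics (a sketch assuming x86; lane comments are written low-to-high, `hsum4` is a made-up name):

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* Horizontal sum of the 4 floats in v = [a b c d]:
   exactly 2 shuffles + 2 vector adds. */
float hsum4(__m128 v) {
    /* swap the two 64-bit halves: [c d a b] */
    __m128 t = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 0, 3, 2));
    v = _mm_add_ps(v, t);                      /* [a+c b+d a+c b+d] */
    /* swap neighbours within each pair: [b+d a+c b+d a+c] */
    t = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 3, 0, 1));
    v = _mm_add_ps(v, t);                      /* every lane = a+b+c+d */
    return _mm_cvtss_f32(v);                   /* extract lane 0 */
}
```

Each add works on all four lanes in parallel, so the reduction takes log2(4) = 2 add steps instead of 3 scalar adds; the shuffles just line the lanes up.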
But remember, similar (old, but simpler) tricks were adopted by compilers, like xor to load 0, lea to do math (not sure about this one), shifts instead of multiply/divide etc
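A source-level sketch of those tricks (the exact instructions chosen vary by compiler and target; the function names are made up):

```c
#include <stdint.h>

/* Classic strength reductions; the compiler does these on its own,
   so write the clear form and let it choose:
     x * 8   ->  shl r, 3            (shift instead of imul)
     x * 5   ->  lea r, [x + x*4]    (one LEA on x86, no multiply)
     x = 0   ->  xor r, r            (shorter encoding, breaks deps)
     x / 8u  ->  shr r, 3            (shift instead of div)        */
uint32_t times8(uint32_t x) { return x * 8u; }   /* typically a shift */
uint32_t times5(uint32_t x) { return x * 5u; }   /* typically one lea */
uint32_t div8(uint32_t x)   { return x / 8u; }   /* typically a shift */
```

These are exactly the optimizations that used to justify hand assembly and are now table stakes for any compiler backend.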
> And no one does inline assembly for that, they use intrinsics
Depends on your architecture. With ARM, compilers will often not set the alignment bits in vector loads and stores, and this can be a big performance hit depending on the microarchitecture. In general, compilers sometimes deal poorly with instructions that have particularly strange register constraints, or loads/stores with address writeback.
> It will probably take another 30 years before you can actually think like a human and perform optimizations like this.
ITYM "It will probably take another 30 years before I use a language that tells you what the required semantics of the code are, meanwhile I'll use my requirements interpretation privileges to relax the semantics in a few places"
Ignoring for a moment that compilers are getting better at automatic vectorization, using a compiler allows you to write highly optimized inline assembly for the (usually) tiny parts of the code where it really matters, and gives you the convenience of a high level language everywhere else.
I noticed that it takes forever to understand inline assembly. I don't foresee myself ever thinking like a computer and reading assembly as well as a high-level language.
This answer perpetuates the myth that compilers are magical black boxes, the sum of millions of hours of intense academic research that "you will never understand".
Replace "compiler" with "computer". Doesn't that make you angry? Answers like these do nothing but prevent people from learning about them.
If you are interested in compilers, here are Edinburgh University's notes from the course "Compiling Techniques", probably a good place to start. Don't let internet tough-guys stop you from learning.
>computers are magical black boxes, the sum of millions of hours of intense academic research
Isn't that awesome? I can use the thing without ever understanding what it does :) I'm so glad my car works without me ever having to know anything about how it works. I know it requires some form of money (gas) to work. That's it. Money in -> Transport out. Perfect.
The linked post never proclaimed no one "will ever understand" compilers. That's just what you're reading into it. You try to get upset at it, so you do. It merely proclaims that compilers are incredibly complex, which is the nice thing about them. If compilers were stupid they'd be a lot less useful.
I mean, the basics of parsing and lexing are easy enough to understand and do, and forming an AST is straightforward, but all the "clever stuff" after that, the hundreds and thousands of optimisation passes that are run, is mind-boggling.
Each one by itself is pretty straightforward, but start adding them up and layering them on top of each other, and it gets crazy pretty quickly.
That doesn't make me angry. I fully accept there are aspects of the stack that allow me to type this message that I will never understand.
Can you e.g. explain how a transistor works, from basic quantum mechanical principles? If not, then there's something in the stack you may never understand, because you're simply not interested in spending the time necessary to understand that at the level a physicist does.
I think the context of "optimizing a single line of yours using hundreds of different optimization techniques based on a vast amount of academic research that you would spend years getting at" might have been lost on you.
He isn't saying compilers are based on a "millions of hours of intense academic research that you won't understand". He's saying the specific optimizations implemented by these compilers are based on "millions of hours of intense academic research that you won't understand".
While the general theory behind compilers isn't particularly complex (and is totally worth learning! Your message about people learning about compilers is spot on), production compilers do employ a significant number of complex optimizations. You won't be able to match these unless you spend a very significant amount of effort learning about these specific optimization techniques.
While learning the theories behind compilers is quite useful, I'd argue that learning N tricks to more efficiently translate C into ASM isn't practical unless you want to turn your compiler into a production one.
Sure you can understand the basics of a compiler from a college course on them. But what sets apart industrial level compilers is all the accumulated knowledge built into them. Students are expected to understand the whole compiler they build for class. But how many people understand every single optimization that GCC can do?
My point is, there's understanding how compilers work, and then there's understanding all of GCC. I wouldn't like it if someone made the former sound too intimidating for me, but I don't mind feeling intimidated by the latter.
Keep in mind that it's an answer to the question "Why don't we program in assembly?"
Computer Science majors should be familiar with how compilers work and the theory behind them. But the answer to the question is that for 99% of cases, using a higher-level language and leaving the optimization to a compiler written by subject matter experts makes sense. (ie. Don't re-invent the wheel.)
And on the Fourth Day, God proclaimed "Thou shalt have the ability to use inline assembly in thy C/C++ code for performance-critical tasks".
I can think of absolutely zero reason to write an entire program in x86 assembly, let alone any other kind of assembly (GCC spits out some pretty optimized code for my little Atmel MCU)... It's a lot nicer to write everything in a high-level, and then write any performance specifics in inline assembly.
The really cool thing to see is how other newer languages have adopted this scheme (e.g. PyASM for Python, or the ability to edit the instructions for interpreted languages that run in their own VM). And as always, great power comes with great responsibility ;)
For x86 I'd say study it, but I don't think there's much fun to be had there. Something like MIPS32 would be pretty fun, because that instruction set actually makes sense to a human.
8086, maybe, but I don't think we can say that modern x86 was "designed" for any one thing at all. Modern x86 is an excellent example of what you get when you evolve something for decades with backwards compatibility as a paramount requirement and little other consistent guidance, for all the good and bad that implies.
writing 4096 byte demos and similar, back in the day, I can assure you I had fun coding x86 asm :)
maybe if the Asphyxia tutorials hadn't been written for Turbo Pascal (+inline asm), and the C compilers (back in '97) were really as good at optimizing as comments here suggest, I'd have gone a different route. But still, that's just performance. I doubt a compiler can beat a human at size optimization, cramming as much audio-visual power into as few bytes as possible; although executable packers have come a loooong loong way since then (and impressively, afaik mostly by research and not so much Moore's law).
Nowadays, compiler writers see zero reason to write inline assembly.
First of all, it throws a wrench into the high-level optimizer's analysis. Plus, compilers and processors are getting smarter day by day, while inline assembly in shipping code becomes more and more untouchable every day.
I've written code for GameBoy Advance which has no OS, just magic memory addresses, and didn't need any assembly. Even hblank interrupts could be implemented in C.
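The "magic memory address" pattern mentioned above can be sketched like this (0x04000000 really is the GBA's display-control register; `set_mode` is a made-up helper, factored through a pointer so the same code can be exercised against ordinary memory):

```c
#include <stdint.h>

/* A hardware register is just a fixed address, so plain C with a
   volatile pointer is enough -- no assembly needed. */
#define REG_DISPCNT (*(volatile uint16_t *)0x04000000)

/* volatile stops the compiler from caching or eliding the access;
   every write really reaches the bus. */
void set_mode(volatile uint16_t *reg, uint16_t mode_bits) {
    *reg = mode_bits;
}

/* On real hardware: set_mode(&REG_DISPCNT, 0x0403);
   (mode 3 + background 2, in GBA terms)              */
```

This is the entire "driver layer" on OS-less hardware like the GBA: a header full of such macros, and ordinary C on top.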
A typical reason to use assembly these days is for instructions the compiler doesn't output (for instance specialized instructions which are only useful for a kernel).
GCC's extended assembly at least (which clang/LLVM also support), allows one to specify constraints for asm() statements that let the compiler respect dependencies etc.
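A minimal sketch of such an asm() statement with constraints (x86-64 AT&T syntax; `add_asm` is a made-up example, shown only to illustrate the constraint syntax):

```c
/* Constraints tell the optimizer what the statement reads, writes,
   and clobbers, so it can register-allocate and schedule around the
   asm instead of treating it as an opaque black box. */
static inline long add_asm(long a, long b) {
    long out;
    __asm__ ("leaq (%1,%2), %0"        /* out = a + b, via LEA       */
             : "=r"(out)               /* output: any register       */
             : "r"(a), "r"(b)          /* inputs: any registers      */
             : /* no clobbers: LEA leaves the flags untouched */);
    return out;
}
```

Because the compiler picks the registers, it can inline this into different call sites with different allocations, which is exactly what hand-placed registers would prevent.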
Fascinating, while I was reading that, it got 5 upvotes! Mindblowing.
This is a really awesome description too. The only thing I know about compilers is that I implemented one for a class (without fancy optimisations), and I am surprised any software works, ever. Compilers are just ... mindbogglingly complex things. Almost as much voodoo dark magic as engineering.
One of the reasons software still works is that compilers are (mostly) very easy to test. You just need an input source file and an expected outcome (either a compiler error, or the result of running the code). No intermediate state to consider, no concurrency, no test database to set up, etc.
Oh, and many serious compilers can compile themselves, so bootstrapping is a pretty good test too.
Another reason is that compilers simply must (*) work for software development to continue, so people have spent the necessary amount of energy to get the implementation and tests right.
(*) There are always exceptions, like when a compiler auto-parallelizes code and introduces a race condition that is rarely triggered. But those cases are blessedly rare.
There are tools available that can generate random (valid) C programs and compare the results produced by different compilers [1]. The Csmith authors say bugs have been found in every tool tested [2], from GCC to costly commercial compilers.
What did you write your compiler in? Writing compilers in, say, raw C is indeed a complex endeavour. But using, say, OCaml or Haskell (which is secretly a DSL for compiler writing) should make it much easier to not fail silently.
Anytime I read about the topic of assembly language, I can't help but think of Michael Abrash. For example, check out Chapter 22 [1] from his Graphics Programming Black Book entitled Zenning and the Flexible Mind for a pleasant stroll down Optimization Lane.
You might also enjoy his book entitled The Zen of Assembly Language which features the Zen Timer (a timing tool for performance measuring).
The best comment:
"Thank you compiler, but perhaps if you weren't commenting on StackOverflow, you could get me a drink and play some nice music while you're working?"
Stack Overflow is plagued by overzealous editing, IMHO. I understand it's hard to balance, but a question of mine recently got 3 downvotes and a close vote, only to be reopened later! And now it even has 2 upvotes.
A guy answered it immediately, while some other peeps started whining about how it was not a proper question. I only wanted a quick fix, and I got it too, while the others nitpicked.
I treat SO like a technically superior colleague - if I'm stuck with something and have exhausted my other options (documentation, examples, research, SO search), then I'll phrase my question clearly, showing my progress and what I'm stuck on. This makes me look good to my colleague, and enables him to understand where I'm coming from. A lot of the time by writing out the question in full I'll be able to solve it myself by getting my thoughts organised.
For some reason that change was attributed to me, when all I did was add a comma after the »Hello« o.ô
EDIT: Ouch. I think those are SO's automatic attempts at making answers less chatty by removing salutations and thanks and the like. I'll rollback or try fixing it by doing evil Unicode things, then.
Hey, my name is ICC, and I'm one of the most respected compilers in the industry. I also sabotage your code so that it works poorly on AMD CPUs, while making sure that Intel CPUs run my code at full speed. After all, Intel likes to establish market dominance.
Blind trust in the compiler is bad, people. Good luck discussing this issue without any assembly programmers who can fully understand what is going on here.
I have little idea what modern-day compilers are doing, or what the CPU or the operating system is doing, for that matter. Often, way too often, compilers fail, hardware fails, operating systems fail; lots of things fail. I am not going to read the millions of lines of code written by other programmers (in f-ing emacs, no less) in any number of differing complex beasts, the compilers.

It seems crazy-making to me that other programmers would create compilers that apply millions of possibilities for optimizing a single line of mine, using hundreds of different optimization techniques based on a vast amount of academic research that I won't be spending years getting at. I do feel embarrassed, yes very icky, that I have little to no idea what a three-line loop will be compiled as, but bloat would be my guess.

There is risk in going to great lengths of optimization or doing the dirtiest tricks. And if I don't want the compiler to do this, I have no idea how to stop this behavior, nor do I want to invest in the specific knowledge of the nuances of any particular compiler. The compiler does not allow me easy access, because the compiler itself is an overly complex piece of software written by other programmers.

I could care less how a compiler would make my code look in assembly, on different processor architectures and different operating systems and in different assembly conventions. Transformation comes from how we as programmers write code, not from compiler-fu.
P.S. Oh, and by the way if I really wasn't using half of the code I wrote, I would throw the source code away.
You seem to be saying that you're completely clueless about how programs get built and executed, and yet at the same time you know better than the compiler what needs to be done.
I've found that in the general case, Compilers Know Better. It might not be true for very simple and limited architectures such as small microcontrollers, but modern CPUs are so complex and "quirky" that most of the time the compiler will beat you.
These days I only use assembly for very low level stuff where I need complete control of the execution flow (dealing with cache invalidation, MMU etc...) or some very specific and aggressive optimisation (like implementing <string.h> in ASM).
But hey, if you want to ditch C and write everything in ASM, be my guest (as long as we don't work together).
The reverse is actually true. On simple orthogonal architectures the compiler is much better able to take advantage of all the features, while on x86 there are many instructions a compiler will never generate because of limitations on when they can be used. A human can occasionally do better than the necessarily limited algorithms used for scheduling and instruction selection.
My rationale was that it takes a much bigger knowledge base to optimize code for more complex architectures because of all the factors you have to take into account.
On simple architectures it's often very straightforward and you can sometimes save CPU cycles/memory space by doing some refactoring the compiler can't/is not allowed to do.
You can for instance "stash" certain important values in CPU registers across functions, disregarding the ABI to avoid pushing/popping the stack or moving stuff around in registers. Well, that's not a very good example because you can tell GCC to do exactly that, but it's not a standard C feature and the compiler cannot do that on its own across translation units. So it's one of those situations where you can "outsmart" the compiler, even if the compiler would probably tell you it's not fair game :)
It's not orthogonal vs. crazy x86-y instruction sets, it's in-order vs. superscalar processing. The compiler knows the intimate details about each architecture's pipeline latency to a degree that most programmers don't.
> I have little to no idea what a three-line loop will be compiled as, but bloat would be my guess.
This is unfortunately too often true. Some compilers are tuned too much for looking good on artificial benchmarks, in which turning 3-line loops into thousands of instructions sometimes helps, even if it hurts on most real-world code.
> And if I don't want the compiler to do this, I have no idea how to stop this behavior, nor do I want to invest in the specific knowledge of the nuances any particular compiler.
The -O0 option, or its equivalent, is pretty easy to find in many compilers. If you're happy with the performance of your code without all those fancy techniques being applied, feel free to use it. Most people aren't ;-).
> P.S. Oh, and by the way if I really wasn't using half of the code I wrote, I would throw the source code away.
Only if you were aware of it ;-). I wish that compilers would focus a little more on helping me make my code better, rather than so much on magically making things better under the covers.
If you want your compiler to do more to help you improve your source, you might like Go, which offers a lot of (sometimes unwelcome) mandatory suggestions that force you to do the right thing even when you don't want to. (Ada and Pascal do the same thing according to a very different philosophy.)
I have come to love the strong typing of VHDL (very similar to Ada but for hardware design). After using it for a while, in my uninformed opinion, I think it can drastically reduce bugs because it assumes nothing and makes the programmer define exactly what they mean.
Hello. I'm an assembly programmer. I use a compiler to generate the majority of my code, and I can hand-craft any assembly that comes out of it. I understand that compilers can auto-generate SIMD instructions much more easily if I use a "struct of arrays" instead of an "array of structs".
TLDR: Real performance programmers need to understand the assembly a compiler generates if they hope to tune the compiler to generate optimal assembly. Also, GCC -O3 is prone to removing too much code and reordering it, causing memory barrier issues and the like. All multi-threaded programmers need to understand how the compiler generates assembly (ie: by reordering your code), and how it can generate new bugs if you don't use the right compiler flags.
Also, GCC -O3 is prone to removing too much code and reordering it, causing memory barrier issues and the like.
Whoah, that's what __sync_synchronize() and volatile are for. If you're trying to write order-dependent multithreaded code without those, the bug's in your code, not in compiler flag juju.
Only questions are closed or locked, I think, and only when they aren't very good questions: ones that would just lead to debate or opinions, but that should stay around out of historical interest. This here is a question that can be answered reasonably and has a single whimsical answer, so there's no need for closing or locking here.
And even though the tone of that answer is humourous it still is a good answer, explaining why we don't all write Assembly instead of HLLs.
I'm working on a project where the CTO of this huge company is criticizing the lack of code I'm writing. It doesn't matter that my code meets every criterion and that no functionality is being left out; he wants more code. It has been very difficult for me to deal with this, because it's the first time I have ever faced such idiocy.
I never noticed that the Stackoverflow js pulls updates for vote tallies in real-time. Browsing this answer while HN is sending lots of traffic there is almost like watching a car odometer.
It is funny how people want to believe in tools. In fact, the optimizations a compiler does are no match for what a programmer can achieve by choosing an appropriate data structure with corresponding algorithms, and by being aware of the strengths and weaknesses of a particular CPU architecture.
(The JVM, which is nothing but a stack-based byte-code interpreter, is the most famous case.) People seem to believe it can do wonders, especially in memory management and data throughput.
It is so strange to see how people are trying to create a whole world inside a JVM. What is it called when people build models of ships inside a bottle?)
btw, now, it seems, they are trying to build a whole world inside a V8 bottle.)
My experience in the industry is just the opposite; people will do anything to avoid admitting that the tools can do better than they can. I've had senior programmers sneeringly declare that anyone who uses a debugger must be an inferior programmer, and then watched them spend a full day figuring out a bug that could have been found in half an hour with the appropriate tool.
You know why it's worth recreating the world inside a JVM? Because its behaviour is so much more thoroughly specified and predictable. Prior to Java, C++ didn't even have a memory model; every new release of GCC breaks a whole raft of C programs that didn't realise they were invoking undefined behaviour by having integer overflow. By the C standard, even a single instance of undefined behaviour invalidates your whole program, making it virtually impossible to predict the behaviour of any nontrivial-sized codebase.
I half agree. C++ directly exposes the underlying computer architecture's memory model. Java abstracts it reasonably well and essentially results in a different model with different advantages and disadvantages (e.g. garbage collection and garbage collection :))
Schemes and Lisps have a memory model; it's just one that's radically different enough from processor memory models that no one realizes it's a memory model, and thus no one gripes about it being different from processor X's memory model.
If you're wondering what the memory model is, you should ask yourself: What happens when I mutate a cell? Are those changes visible in references to that cell? Are those changes visible across thread boundaries? What happens when I dereference a cell? Does it continue to take up memory, or is it garbage-collected? What if it's part of a cycle?
But I digress; your point was actually about languages that don't talk about memory, such as the lambda calculus. A degenerate memory model is still a memory model, and an inherently portable one at that. My point is, for languages which do talk about memory, it's necessary that they specify how that memory works in order to be a portable language.
Python has a memory model. It's just too bad that so much of the python ecosystem relies on bindings to C libraries, so you go to resize an image and then you get a segfault that brings down your whole program (real example).
https://www.facebook.com/sedatk/posts/10151240841812644