
> It can be better to copy a little code than to pull in a big library for one function.

I just don't get this. If you statically link in small functions from a big library, you only get the little bit you need anyway. Are they saying you avoid compiling the "big library" over and over? But if it is already compiled, that should not be necessary. And the chances are you are going to be importing lots of "little code" from the "big library" anyway. Unless they are saying the implementation of net's itoa is somehow simplified and not just a straight copy of the code... otherwise I don't understand this approach.



> And the chances are you are going to be importing lots of "little code" from the "big library" anyway.

That "big library" is often maintained by someone else. One day, a year down the line, you realize startup time for your executables went from 0.01s to 2s because Big Library's initialization code now scans all subdirectories for some caching optimization - stuff that's a win for general users of biglib, but not for you.

> I just don't get this

This is not about theory, it's about practical experience.

You'll notice that successful C/C++ libraries tend to include everything they need to function that's small enough - because dependencies bite you later on. E.g. libav/ffmpeg has its own copy of md5, libtcc has its own slab allocator, gsl has its own everything, and everything ships with a copy of libz in case the system libz is borked.

The Go team just starts applying this principle a little earlier - inside the standard library. I suspect that's because they envision a Python-style batteries-included standard library, which has no benevolent dictator to keep order in it.




