It's probably a false sense of security, though. If runes are in fact code points, that's about the least helpful abstraction. They do not encode grapheme clusters, meaning some items, such as emoji, may be represented by multiple runes; split a string between those runes and each half is still valid UTF-8, but you're left with two half-characters that render as something else entirely. You can't really do much with code points -- they're useful when parsing things or when implementing Unicode algorithms, and not much else.
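A minimal Go sketch of that failure mode, using a thumbs-up emoji with a skin-tone modifier (one grapheme cluster, two code points):

```go
package main

import "fmt"

func main() {
	// "👍🏽" (U+1F44D thumbs up + U+1F3FD skin-tone modifier) is one
	// user-perceived character but decodes to two runes.
	s := "👍🏽"
	runes := []rune(s)
	fmt.Println(len(runes)) // 2

	// Splitting between the runes yields valid UTF-8 on both sides,
	// but each half renders as a different character: a plain
	// thumbs-up and a bare skin-tone swatch.
	left, right := string(runes[:1]), string(runes[1:])
	fmt.Println(left, right)
}
```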
If you care about grapheme clusters, it doesn't mean using characters is wrong: you can layer a higher level API on top of characters.
For example, if you want to encode emoji, some non-Unicode abstract description goes in and a sequence of characters comes out; and if you want to find the boundaries of letters clustered with their combining diacritical marks, those clusters are subsequences within a sequence of characters.
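As a toy illustration of layering such an API on top of code points, here is a deliberately simplified Go segmenter that attaches nonspacing combining marks (Unicode category Mn) to the preceding base rune. This is an assumption-laden sketch, not full UAX #29 grapheme segmentation, which has many more rules:

```go
package main

import (
	"fmt"
	"unicode"
)

// clusters is a toy segmenter: it glues nonspacing combining marks
// (category Mn) onto the preceding base rune. Real grapheme cluster
// segmentation (UAX #29) covers many more cases than this.
func clusters(s string) []string {
	var out []string
	for _, r := range s {
		if unicode.Is(unicode.Mn, r) && len(out) > 0 {
			out[len(out)-1] += string(r)
			continue
		}
		out = append(out, string(r))
	}
	return out
}

func main() {
	// "e" + U+0301 COMBINING ACUTE ACCENT: two runes, one letter.
	fmt.Println(clusters("e\u0301tude")) // 5 clusters: é t u d e
}
```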
> If you care about grapheme clusters, it doesn't mean using characters is wrong: you can layer a higher level API on top of characters.
Be very careful here; when talking about Unicode, there is no single thing called a "character", and people being insufficiently precise about this has been the source of much confusion.
"You can't really do much with code points -- they're useful when parsing things or when implementing unicode algorithms, and not much else."
That is exactly what "rune" is intended for, though. In the several years I've been using Go, I believe I've used it once, precisely in a parsing situation.
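For flavor, here is the kind of parsing situation where runes earn their keep, using a hypothetical lexer helper (`scanIdent` is an invented name for illustration):

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// scanIdent is a hypothetical lexer helper: it reads a leading
// identifier (a letter followed by letters or digits) off the front
// of s and returns it plus the unconsumed rest. Decoding runes is
// exactly the right tool at this level.
func scanIdent(s string) (ident, rest string) {
	end := 0
	for i, r := range s {
		if unicode.IsLetter(r) || (i > 0 && unicode.IsDigit(r)) {
			end = i + utf8.RuneLen(r)
			continue
		}
		break
	}
	return s[:end], s[end:]
}

func main() {
	id, rest := scanIdent("café42 = 1")
	fmt.Printf("%q %q\n", id, rest) // "café42" " = 1"
}
```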
For the most part, Go's answer to Unicode is to treat strings as bytes, casually assume they're UTF-8 unless you really go out of your way to ensure they're something else, and not try to do anything clever to them. As long as you avoid writing code that might accidentally cut things in half... and I mean that as a serious possible problem... it mostly just works.
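The "don't cut things in half" discipline can be sketched as a byte-limit truncation that backs up to the nearest rune start. Note this is only rune-safe, not grapheme-safe: it will never produce invalid UTF-8, but it can still separate an emoji from its modifiers:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncate cuts s to at most max bytes without splitting a UTF-8
// sequence: it backs up until the cut lands on a rune start.
// (Rune-safe only: it can still split a multi-rune grapheme cluster.)
func truncate(s string, max int) string {
	if len(s) <= max {
		return s
	}
	for max > 0 && !utf8.RuneStart(s[max]) {
		max--
	}
	return s[:max]
}

func main() {
	s := "naïve" // 'ï' is 2 bytes in UTF-8, so s is 6 bytes long
	fmt.Println(truncate(s, 3)) // "na", not "na" plus half of 'ï'
}
```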
To be honest, if you're trying to tear apart this example emoticon at all, you're almost certainly doing something wrong already. The vast bulk of day-to-day [1] code most people write should take the Unicode characters in one side and spit them out somewhere else without trying to understand or mangle them. Much of the code that does need to understand them should still not be hand-rolled, but should lean on something that has already implemented word tokenization or the like, or should stick to operations that are safe on opaque bytestrings (e.g., a find-and-replace function can basically do a traditional bytestring-based replacement; code to replace "hello" with "goodbye" will work even if this emoticon is in the target text, provided you do the naive thing). What's left is specialized enough that it's reasonable to expect people to obtain extra knowledge and use specialized libraries.
A lot of what's going wrong here is putting all these things in front of everybody, which just confuses people and tempts them into doing inadvisable things that half work. In a lot of ways the best answer is to stop making this so available: lock it behind another library, and guide most programmers toward an API that treats strings more opaquely and doesn't provide a lot of options that get you into trouble.
It isn't necessarily a perfect solution, but it's a 99% solution at worst. I write a lot of networking code that has to work in a Unicode environment, so it's not like I'm in a domain far from the problem when I say this. It's just, 99% of the time, the answer is, don't get clever. Leave it to the input and rendering algorithms.
[1]: I say this to contrast things like font rendering, parsing code, and other things that, while they are vitally important parts of the programming ecosystem and execute all the time, aren't written that often.