Hacker News

I don't think type conversions need to always be explicit, there just needs to be a lot of careful thought before they are.

Automatic deref would be an absolute pain to live without for example.



Speaking of explicit conversions, it's always painful to have to do explicit conversions of integers. E.g. have a u8 and want to use it as an index into a Vec? You have to cast it to usize. That quickly becomes annoying.
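A minimal sketch of the annoyance being described (array and values are made up): indexing in Rust requires usize, so a u8 index has to be converted at every use site.

```rust
fn main() {
    let xs = vec![10, 20, 30];
    let i: u8 = 2;

    // indexing takes usize, so the u8 must be converted explicitly
    let x = xs[i as usize];

    // usize::from is the lossless alternative to `as` for widening
    let y = xs[usize::from(i)];

    assert_eq!(x, 30);
    assert_eq!(y, 30);
}
```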


On the contrary, I think that’s an argument against implicit conversions. u8 and usize have wildly different ranges, and treating them as the same could cause some maddening bugs. I can’t imagine there are that many places where you’d need to use a u8 as an index; if there were, you could either wrap your data structure in one that accepts u8 indices, or just use usize throughout...

(Not that I’m claiming to know your code base or specific challenge or anything; I’m just speaking generally)


> I can’t imagine there are that many places where you’d need to use a u8 as an index

I have some firmware on a small machine; there aren't any arrays with more than two dozen elements. On the eight-bit machine the code originally ran on, using a 16- or 32-bit int caused a lot of code bloat. You might not think that's a problem, but consider that the price difference between a processor with 64k of flash and one with 128k might be a dollar. Times 100,000 units a year.

The above is why I'm not going to use Rust anytime soon: a Rust binary is about 4 times larger than the equivalent C. That would add about $2-3 to the cost of the product. Or $200-300k a year for no real benefit at all.


That doesn't make any sense. If `i` is a `u8` and `xs` is an array, then `xs[i as usize]` works today.

The criticism isn't even specific to exotic environments. The same reasoning applies at bigger widths too. I've certainly used `u32` in places instead of `usize` to avoid doubling the size of my heap use on 64-bit systems.

Implicit widening would be nice, but it isn't necessary.


In the firmware I mentioned, on an 8-bit system, using a 16-bit index increases the resulting code size by 2-3X.

A lot of times the code size doesn't matter. In my case it's important. Consider a lowly printf statement in my firmware.

It takes about 120 bytes of code. A trivial amount! Let's see how much that costs us.

Marginal cost of flash is about $1.00/64k. So 120b/64k × $1 = $0.001875 per unit.

We ship 100,000 units per year.

So that printf costs $187 per year.


I feel like you just repeated your previous comment. I don't need a lesson on unit economics. My whole point is that you don't need a 16-bit index.


I think implicit widening is a good idea, but not narrowing: expanding a u8 into a usize doesn't actually lose any information, but going the opposite way does.
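This asymmetry is visible in the standard library's conversion traits: widening conversions get infallible From impls, while narrowing ones only get fallible TryFrom. A small sketch with made-up values:

```rust
use std::convert::TryFrom;

fn main() {
    let small: u8 = 200;

    // widening: every u8 value fits in a usize, so From is implemented
    let wide: usize = usize::from(small);
    assert_eq!(wide, 200);

    // narrowing: a usize value may not fit in a u8, so only TryFrom exists
    let big: usize = 300;
    assert!(u8::try_from(big).is_err());
    assert_eq!(u8::try_from(wide), Ok(200u8));
}
```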


Depends on the machine architecture really but I agree in principle.


If you need to do that a lot then arguably you'd be better off having a variable of 'usize' in the first place.


Would lack of automatic deref be less painful with a "->" operator equivalent, or just postfix deref?

I don't want to say automatic deref is harmful, but a lot of the time I wish I could locally deduce more about the level of indirection.


I would argue Deref coercions are still explicit, because the trait implementations are explicit. There is not a magic mapping of types whose references can be coerced to each other, it is exactly the ones that implement Deref<Target=T>.
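To illustrate the point that the coercion is opted into per type, here's a sketch with a hypothetical `Label` newtype: references to it coerce to `&str` only because of the explicit Deref impl.

```rust
use std::ops::Deref;

// Hypothetical newtype; &Label coerces to &str only because
// we opt in with this explicit Deref implementation.
struct Label(String);

impl Deref for Label {
    type Target = str;
    fn deref(&self) -> &str {
        &self.0
    }
}

fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let l = Label("hello".to_string());
    // Deref coercion kicks in at the call site: &Label -> &str
    assert_eq!(takes_str(&l), 5);
}
```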


Type conversions should only have to be explicit when they are lossy; otherwise the cast is just useless noise and cognitive overhead that can itself lead to a bug.


Type conversions, even non-lossy ones, can teach people to use the wrong type. In C++ it's very common to see people use an int in a for loop indexing an array, when you should always use size_t for that purpose. This misuse is so widespread that people hardly even know size_t exists. https://www.viva64.com/en/a/0050/ has some nice material about why this matters and the kinds of bugs it can cause.


Personal opinion, heavy use of int is a code smell.

Ada with 'range' probably gets this right.


Can't they have performance impacts even if they aren't lossy?


I dunno, "newtypes" are a fairly popular pattern, and if they automatically converted between the base type and other newtypes of the base type, they'd not really be useful.
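A quick sketch of that point, with hypothetical names: two newtypes over the same base type deliberately don't interconvert, and that incompatibility is the whole value of the pattern.

```rust
// Two newtypes over u64. No implicit conversion exists between
// them or back to plain u64; mixing them up is a compile error.
struct UserId(u64);
struct OrderId(u64);

fn lookup_user(id: UserId) -> u64 {
    // access to the inner value is explicit
    id.0
}

fn main() {
    let user = UserId(42);
    let _order = OrderId(42);

    // lookup_user(_order); // would not compile: OrderId is not UserId
    assert_eq!(lookup_user(user), 42);
}
```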



