Speaking of explicit conversions, it's always painful to have to do explicit conversions of integers. E.g. have a u8 and want to use it as an index into a Vec? You have to cast it as usize. That quickly becomes annoying.
On the contrary, I think that’s an argument against implicit conversions. u8 and usize have wildly different ranges, and treating them as the same could cause some maddening bugs. I can’t imagine there are that many places where you’d need to use a u8 as an index; if there were, you could either wrap your data structure in a struct which accepts u8 instead, or just use usize more broadly...
(Not that I’m claiming to know your code base or specific challenge or anything; I’m just speaking generally)
> I can’t imagine there are that many places where you’d need to use a u8 as an index
I have some firmware on a small machine; there aren't any arrays with more than two dozen elements. On the eight-bit machine the code originally ran on, using a 16- or 32-bit int caused a lot of code bloat. You might not think that's a problem, but consider that the price difference between a processor with 64k of flash and one with 128k might be a dollar. Multiply that by 100,000 units a year.
The above is why I'm not going to use Rust anytime soon: a Rust binary is about 4 times larger than the equivalent C binary. That would add about $2-3 to the cost of the product, or $200-300k a year, for no real benefit at all.
That doesn't make any sense. If `i` is a `u8` and `xs` is an array, then `xs[i as usize]` works today.
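For reference, a minimal sketch of what that cast looks like in practice (the variable names are illustrative):

```rust
fn main() {
    let xs = vec![10, 20, 30, 40];
    let i: u8 = 2; // small index type, e.g. on a constrained target

    // Indexing requires usize, so the widening is explicit.
    // `as usize` is lossless here because a u8 always fits in a usize.
    let x = xs[i as usize];
    assert_eq!(x, 30);

    // `usize::from` works too, and makes the losslessness explicit,
    // since From is only implemented for infallible conversions.
    let y = xs[usize::from(i)];
    assert_eq!(y, 30);
}
```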
The criticism isn't even specific to exotic environments. The same reasoning applies at bigger widths too. I've certainly used `u32` in places instead of `usize` to avoid doubling the size of my heap use on 64-bit systems.
Implicit widening would be nice, but it isn't necessary.
I think implicit widening is a good idea, but not narrowing --- expanding a u8 into a usize doesn't actually lose any information, but going the opposite way does.
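A quick illustration of that asymmetry using the standard From/TryFrom traits:

```rust
use std::convert::TryFrom;

fn main() {
    let small: u8 = 200;

    // Widening is infallible: every u8 value fits in a u32,
    // so From is implemented and can never fail.
    let wide: u32 = u32::from(small);
    assert_eq!(wide, 200);

    // Narrowing can lose information, so only TryFrom is provided.
    let big: u32 = 300;
    assert!(u8::try_from(big).is_err()); // 300 doesn't fit in a u8

    // An `as` cast silently truncates instead: 300 % 256 == 44.
    assert_eq!(big as u8, 44);
}
```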
I would argue Deref coercions are still explicit, because the trait implementations are explicit. There is not a magic mapping of types whose references can be coerced to each other, it is exactly the ones that implement Deref<Target=T>.
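As an example, a toy Deref implementation (not from the thread, just a sketch) shows how explicit the mapping is:

```rust
use std::ops::Deref;

// A toy smart-pointer-like wrapper around a String.
struct Wrapper(String);

impl Deref for Wrapper {
    type Target = String;
    fn deref(&self) -> &String {
        &self.0
    }
}

fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let w = Wrapper(String::from("hello"));
    // &Wrapper coerces to &String (via our impl above), and &String
    // coerces to &str (via String's own Deref impl). Each hop exists
    // only because someone wrote an explicit Deref implementation.
    assert_eq!(takes_str(&w), 5);
}
```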
Type conversions, even non-lossy ones, can teach people to use the wrong type. In C++ it's very common to see people use an int as the loop variable when indexing an array, when size_t should always be used for that purpose. The misuse is so widespread that many people hardly know size_t exists. https://www.viva64.com/en/a/0050/ has some nice material on why this matters and the kinds of bugs it can cause.
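In Rust the analogous mistake gets caught at the conversion boundary; a sketch of a signed value reaching an index position:

```rust
use std::convert::TryFrom;

fn main() {
    let xs = [1, 2, 3];

    // A signed computation that can go negative, like the
    // loop-counter bugs the linked article describes in C++.
    let i: i32 = -1;

    // `xs[i]` doesn't compile, since indexing takes usize. You
    // must convert, and TryFrom surfaces the negative value
    // instead of silently wrapping it to a huge index.
    match usize::try_from(i) {
        Ok(idx) => println!("xs[{}] = {}", idx, xs[idx]),
        Err(_) => println!("negative index rejected"),
    }
}
```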
I dunno, "newtypes" are a fairly popular pattern, and if they automatically converted between the base type and other newtypes of the same base type, they'd not really be useful.
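A sketch of why that matters: two newtypes over the same base type stay distinct (the names here are made up):

```rust
// Two distinct id types over the same underlying integer.
struct UserId(u32);
struct OrderId(u32);

fn lookup_user(id: UserId) -> String {
    format!("user #{}", id.0)
}

fn main() {
    let uid = UserId(7);
    let _oid = OrderId(7);

    // `lookup_user(_oid)` would be a compile error: the whole point
    // of the newtype is that OrderId is not interchangeable with
    // UserId, even though both wrap a u32. Implicit conversion
    // through the base type would erase that distinction.
    assert_eq!(lookup_user(uid), "user #7");
}
```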
Automatic deref, for example, would be an absolute pain to live without.