Bringing up a different development board with a familiar architecture (e.g. an ARM Cortex part) is mostly a matter of loading drivers for all the parts connected on the board. For that you write a 'device tree': a text file loosely similar to a JSON-style data definition, with many particular conventions for pins, addresses and buses. See any document on the Device Tree format, e.g. from NXP/Freescale.
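As a rough illustration (the node and addresses here are made up for the example), a device tree fragment describing a sensor hanging off an I2C bus might look like:

```dts
&i2c1 {
        status = "okay";

        /* hypothetical temperature sensor on the I2C1 bus */
        sensor@48 {
                compatible = "ti,tmp102";
                reg = <0x48>;        /* I2C slave address */
        };
};
```

The `compatible` string is what the kernel uses to match a driver to the node. Such a source file is typically compiled with something like `dtc -I dts -O dtb -o board.dtb board.dts`.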
It gets compiled (by dtc) into a .dtb blob, the 'device tree binary', which is flashed onto the board next to the OS. The bootloader (e.g. a U-Boot image) expects it at a certain place, loads it into memory and passes its address to the booting kernel.
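The .dtb is a simple binary format with a fixed big-endian header up front. A minimal sketch of parsing that header in Python (field names follow the standard fdt_header layout; the demo bytes are synthetic, a real blob would come from dtc):

```python
import struct

FDT_MAGIC = 0xD00DFEED  # big-endian magic at the start of every .dtb

def parse_fdt_header(blob: bytes) -> dict:
    """Parse the fixed 40-byte header of a flattened device tree blob."""
    fields = struct.unpack_from(">10I", blob, 0)  # ten big-endian u32s
    names = ("magic", "totalsize", "off_dt_struct", "off_dt_strings",
             "off_mem_rsvmap", "version", "last_comp_version",
             "boot_cpuid_phys", "size_dt_strings", "size_dt_struct")
    header = dict(zip(names, fields))
    if header["magic"] != FDT_MAGIC:
        raise ValueError("not a device tree blob")
    return header

# Synthetic header purely for demonstration; offsets/sizes are arbitrary.
demo = struct.pack(">10I", FDT_MAGIC, 0x100, 0x38, 0x78,
                   0x28, 17, 16, 0, 0x20, 0x40)
hdr = parse_fdt_header(demo)
print(hdr["version"])  # prints 17
```

This is just to show there's no magic in the blob: it's the same tree, serialized, which the bootloader can hand to the kernel as an opaque chunk of memory.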
Of course the kernel needs the device tree to use most devices, which is why there's an extra loading step: the bootloader itself is a flash image with a tiny hardcoded subset of device information (console, storage, RAM), just enough for the initial load.
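With U-Boot, that handoff is often spelled out interactively; a sketch of a typical boot sequence (addresses and filenames are board-specific examples, not universal):

```
# at the U-Boot prompt: load kernel and dtb from the first MMC partition
=> load mmc 0:1 ${kernel_addr_r} zImage
=> load mmc 0:1 ${fdt_addr_r} board.dtb
# boot the kernel, passing the device tree address as the third argument
=> bootz ${kernel_addr_r} - ${fdt_addr_r}
```

The `-` in the middle is where an initrd address would go if one were used.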
That's all I can say without launching into chapters of details!
Device Trees seem like such a step backwards compared to how it is on x86_64 boxes, where searching and discovering all your hardware has long since been standardized.
PCs are far more functionally standardised than embedded systems. And in fact modern PCs do use something similar to, but considerably more complicated than, DTs: ACPI tables, which contain not only static data but also bytecode methods that let the system firmware do some of the work at runtime. Even on a PC there are still many devices that sit on non-discoverable buses like I2C or SPI.
The only reason it seems simpler on PCs is that the ACPI tables are supplied by the hardware vendor, so they're mostly invisible to the end user. But that only works because PCs are mass-produced and the variability between manufacturers is fairly limited. Microsoft and Intel specified what the system firmware must do and enforced it through certification programs. Most PC firmware is also only tested for the case of booting Windows, and Linux often has to emulate its bugs...
The same wouldn't work for embedded systems which need far more flexibility.
Yes, some ARM systems these days use UEFI and ACPI, but that is server hardware: basically a PC where the x86_64 processor has been swapped for an ARM one, and which is expected to otherwise work the same.
Those tables are often wrong. That causes all sorts of issues and is a big reason why Linux tends to have sleep problems, and why Windows Modern Standby exists. When it happens you're either maintaining workarounds on the kernel side or patching the ACPI table yourself (terrifying, see this example [1]). The hope is that the end user will eventually apply a firmware update that might not exist for months or years. With device trees, the fix is a patch in the kernel tree and it rolls out with the next kernel update.
The main thing is that there's relatively little incentive to standardise: the kind of hardware you write device trees for is almost never the kind you sell to customers to load whatever OS they fancy onto; it's generally a special-purpose device intended to run one software stack. The fact that there's even a standard like devicetree is basically the Linux maintainers trying to avoid hardware companies writing a million custom initialisation routines. (AFAIK there is also some standardisation around the bootloader providing a devicetree to the OS on ARM servers, where customers do have that expectation, though ACPI is also often used.)
I have a calculator that uses a SuperH CPU, which is already supported by Linux, but drivers for things like the display won't be. It only has 8MB of RAM and 16MB of storage which could be restrictive. Would porting Linux to it for fun/learning be a viable thing to do as someone who knows some C and has done some low level programming, but hasn't worked on Linux before?