Hacker News
Enabling Rust on Embedded Platforms – Linux, RTOS, Bare Metal (github.com/rust-embedded)
235 points by ingve on Oct 16, 2017 | hide | past | favorite | 50 comments


For running Rust on bare metal, I thoroughly enjoyed Philipp Oppermann's blog series. [0]

Admittedly, I only used his assembler tips, to create a minimal kernel for testing VMware's new desktop requirements on Workstation 14/Fusion 10, where a CPU has to have "VMX Unrestricted Guest" support for their product to be able to run VMs.

Thanks to his blog I was able to create a small bootable CD image to test for Unrestricted Guest features on Intel CPUs. [1]

Finally, if you want to see an OS in Rust, it already exists! Have a look at Redox. [2]

[0] https://os.phil-opp.com/multiboot-kernel/

[1] https://communities.vmware.com/message/2709089#2709089

[2] https://www.redox-os.org/


I've been using japaric's awesome work on Xargo (for cross-compiling the std lib) to target a variety of ARM Cortexes. Sometimes doing all the work in Rust, sometimes just linking against Rust crates in a C program. It's a breeze and super fun.

This is, in my opinion, where Rust really shines: using one codebase - without installing any new tools - I can target pretty much all platforms from embedded to browser to mobile.


Also enjoying https://github.com/japaric/svd2rust, which generates simple, uniform register mappings for various Cortex-M and MSP430 micros.


I was disappointed when I tried to use svd2rust with the Atmel-provided SVDs. They unfortunately do not provide reset values for some registers, which causes svd2rust to panic.


I was able to make a hacky fix for that. If you get the `svd2rust` source code and modify `generate.rs` on line 756 you can get it to generate a library file and not panic.

There are still a ton of warnings that pop up, and I haven't yet tested it on an MCU but in theory it should work.


I did exactly the same thing, but I'm not totally happy with it. Good for fooling around with, but not convinced I'd want to push something using it into production without a bit of further investigation.


I think the standard approach for crates to work around problems like this is to patch the vendor's XML. Terrible that it's necessary, but workable. The "STM32F103xx.patch" in japaric's stm32f103 support crate [1] seems a decent example of the sort of hacks that are standard. [1] https://github.com/japaric/stm32f103xx/blob/master/STM32F103...


Yes! To me this is an area where Rust could really shine. Embedded development in C is chock-full of headaches that Rust has the right medication for: package management, code reuse, safety. I would love to see more people building packages and focusing on making Rust great for embedded development.


For Rust on bare-metal targets, it is also worth looking at http://blog.japaric.io/quickstart/ and other work by Jorge Aparicio: https://github.com/japaric.


Is anyone else concerned about Rust’s handling of heap allocation failures? Right now the existing APIs for heap allocation are built pretty deep into the language and they all panic on failure. It’s one thing for Firefox to crash when my computer completely runs out of swap, but another for Linux to panic when it’s out of heap space.


The allocation API most built into the language is https://doc.rust-lang.org/std/heap/trait.Alloc.html and it returns a Result. Some std APIs like Box and Vec do just panic/abort on allocation failure (although that's changing), but those are all library types defined in std; neither the language itself nor the lower-level core library (more relevant to embedded platforms) does any allocation.


Unless I'm mistaken, the little-b `box` keyword panics on failure. Isn't that required to construct a type in place in heap memory?


The box keyword isn't in stable Rust yet, and isn't even "the box keyword" anymore. https://github.com/rust-lang/rfcs/blob/master/text/1228-plac... is the latest RFC I believe, but even that I believe is changing since the latest allocator API work, though I'm not 100% sure.


My understanding is that the latest RFCs touching that syntax (not sure if they're implemented) have it overloaded with traits, and I would guess/hope that it's possible to implement those traits in a way that is fallible (e.g. hypothetically 'impl BoxSyntax for Result<Box<T>, AllocationError>').


Yes.

The issue is actually being worked on. For example, the draft RFC for fallible collection allocations is now in final comment period [0].

Edit: fix link

[0] https://github.com/rust-lang/rfcs/pull/2116
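For flavor, the API that RFC proposes eventually took the shape of `Vec::try_reserve`, which returns a `Result` instead of aborting. A minimal sketch of the fallible style (assuming a later toolchain where `try_reserve` has landed; `make_buffer` is a made-up example):

```rust
use std::collections::TryReserveError;

// Build a buffer without aborting the process on allocation failure.
fn make_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(len)?; // Err on failure instead of abort
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    // A small request succeeds.
    assert!(make_buffer(1024).is_ok());
    // An impossible request fails gracefully with a capacity-overflow error.
    assert!(make_buffer(usize::MAX).is_err());
}
```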


> they all panic on failure. It’s one thing for Firefox to crash when my computer completely runs out of swap

Actually most large allocations in Firefox are fallible and the browser will only crash when it can't deal with an allocation in security-sensitive code. So that's a terrible example!


What's wrong with panic? Handle it, if you want. How does Linux respond when it can't allocate something from the heap?


Trying to catch panics in Rust is best-effort only: see https://doc.rust-lang.org/std/panic/fn.catch_unwind.html . It's pretty strongly discouraged for general exception handling. You can check, but I have a feeling that stack unwinding requires some heap allocations, in which case an OOM panic will cause a double panic and force an abort, and thus cannot be caught.

The proper solution in Rust is to use the Result type.

As for what Linux does: it will invoke the "OOM Killer" which will kill user processes to clean up space.
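To make the contrast concrete, here's a small sketch of both approaches; `fallible` is a made-up example function:

```rust
use std::panic;

// The idiomatic approach: return a Result and let the caller decide.
fn fallible(flag: bool) -> Result<u32, &'static str> {
    if flag { Ok(42) } else { Err("no value") }
}

fn main() {
    // catch_unwind can observe a panic from unwinding code, but it is
    // best-effort only (it won't help under panic=abort or a double panic).
    let caught = panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(caught.is_err());

    // Result makes the failure part of the signature instead.
    assert_eq!(fallible(true), Ok(42));
    assert_eq!(fallible(false), Err("no value"));
}
```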


"As for what Linux does: it will invoke the "OOM Killer" which will kill user processes to clean up space."

Which is also pretty catastrophic. Honestly, I'm not sure which I would prefer more: a debuggable kernel panic that would be easy enough to recover from, or a still-running but possibly lobotomized machine.


There are a lot of options available, some outlined here: http://www.oracle.com/technetwork/articles/servers-storage-d...

You can configure the OOM killer to simply panic on OOM. You can also control which processes are prioritized for OOM killing, as the sibling comment mentions.

Not mentioned in the article is that you can set per-process memory usage limits, where subsequent calls to malloc() will return NULL if trying to allocate more. Some applications will try to properly handle that condition, many will just segfault or do some kind of controlled abort. For a lot of applications failing is the right behavior on memory allocation failure. I mean, presumably you weren't trying to allocate that memory on a whim, you need that memory to function properly! So there really isn't any sane way to continue functioning without either disabling functionality or hanging until memory becomes available. Either way, I generally prefer something to unambiguously fail so I can restart it than have a "gray failure" where the system tries to keep limping along.


In a decent number of cases this will kill the offending process taking up too much memory, and then your service manager (systemd etc.) will restart it. If the problem was due to a memory leak, this typically recovers the situation for a while. Of course, the OOM killer might kill the wrong process, which can be annoying. This can be tweaked on a per-process basis using /proc/$process/oom_score_adj, or using OOMScoreAdjust in the systemd service file.


As the sibling says, catching panics isn't as reliable as Result for error handling, and, anyway, the parent was slightly wrong: the standard library types like Box and Vec abort on allocation failure, not panic.


I took a look at the gpio code, just to see what Rust for embedded would look like.

I didn't like the noise due to the obligatory use of unwrap() after calling a function with constants (such as constructing a regex). Such setup obviously must panic when it fails.

Also, Rust seems to be missing a way to "clean out the parent scope", by saying something like:

  let Some(a) = check(param0)  &&  let Some(b) = check(param1) 
  else { 
      return Err("Illegal arguments"); 
  }
Here 'a' and 'b' would be created in the parent scope, not in the body as in the 'if let' statement.

Note that both criticisms revolve around introducing a concept of failure in the type system. Maybe the last member of a type could correspond to "failure" by default (ie None or Err() in typical cases).

I have been recently learning Rust and like its super powers. The above observations worry me in the sense that error handling is quite central to clean and readable code, and because Rust seems to promote the nesting of the main path, which, at least to me, is an anti-pattern.

Please correct me if I'm wrong.
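(For what it's worth, one way to get that early-return shape on stable Rust is to match on a tuple and return from the `_` arm. A sketch, where `check` is a hypothetical stand-in returning an Option:)

```rust
// `check` is a hypothetical stand-in returning an Option, as in the comment.
fn check(param: i32) -> Option<i32> {
    if param >= 0 { Some(param * 2) } else { None }
}

fn run(param0: i32, param1: i32) -> Result<i32, &'static str> {
    // The early return lives in the `_` arm, so `a` and `b` end up
    // in the parent scope and the main path stays un-nested.
    let (a, b) = match (check(param0), check(param1)) {
        (Some(a), Some(b)) => (a, b),
        _ => return Err("Illegal arguments"),
    };
    Ok(a + b)
}

fn main() {
    assert_eq!(run(1, 2), Ok(6));
    assert_eq!(run(-1, 2), Err("Illegal arguments"));
}
```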


You're right - that would suck if you couldn't do that!

In Rust, damn near everything is an expression, including `if` statements. So:

    let a = if let Some(x) = check(param0) { x } else { return Err(""); };
Oh, I couldn't tell from your post whether you were aware, but if you have a return type `T` and you want to compose it with an error type, you can use the type Result<T, ErrType>. This is functionally equivalent to having this:

>Maybe the last member of a type could correspond to "failure" by default (ie None or Err() in typical cases).

as a language-level feature, except it's opt-in. So functions that return `T` instead of `Result<T, _>` never return errors (besides panics, which are more-or-less 'fatal'), and that fact is obvious to the caller. That's really nice IMO.


> let a = if let Some(x) = check(param0) { x } else { return Err(""); };

That's not DRY at all.

BTW, by stating that Rust has no concept of failure, I meant to say the compiler doesn't have one, as it doesn't know that Err() in a type conveys an error. To automatically unwrap() something, I think it would have to know.


This line is much more concise as

  let a = check(param0)?;
or possibly

  let a = check(param0).ok_or("")?;
depending.


Wait does that work? Unwrapping Options using `?`?


Oh, I missed that it's an Option.

That means that it won't work today, but https://github.com/rust-lang/rust/pull/42526 landed, and is in beta. This means on the next release of Rust, code like this will work:

    fn try_result_some() -> Option<u8> {
        let val = Some(1)?; // Ok also works
        Some(val)
    }
that is, using `?` on an Option in a function returning an Option.

That being said, similar code will not yet work:

    fn try_result_some() -> Result<u8, Box<Error>> {
        let val = Some(1)?;
        Ok(val)
    }
that is, using `?` on an Option in a function returning Result. This is because https://doc.rust-lang.org/nightly/std/option/struct.NoneErro... isn't stable yet.
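In the meantime, a common way to bridge the two on stable Rust is `Option::ok_or`, which converts the Option into a Result first. A sketch (`find_even` is a made-up example):

```rust
// Bridge an Option into a Result-returning function with ok_or,
// which works on stable Rust without needing NoneError.
fn find_even(nums: &[u8]) -> Result<u8, &'static str> {
    let val = nums
        .iter()
        .copied()
        .find(|n| n % 2 == 0)
        .ok_or("no even number")?; // None becomes Err here
    Ok(val)
}

fn main() {
    assert_eq!(find_even(&[1, 2, 3]), Ok(2));
    assert_eq!(find_even(&[1, 3]), Err("no even number"));
}
```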


I think what he is asking for is Swift's `guard`.


You can use a tuple in that case, as in:

    if let (Some(a), Some(b)) = (check(param0), check(param1)) { ... } else { ... }


> I didn't like the noise due to the obligatory use of unwrap() after calling a function with constants (such as constructing a regex). Such setup obviously must panic when it fails.

Once we get real const evaluation, toward which huuuuuge steps were taken last week, this stuff will go from "kinda awkward" to "very good"; that is, that kind of thing can run at compile time, and you'll know that the runtime code isn't calling unwrap at all.

The underlying stuff here (miri) also has really interesting implications for unsafe code and tooling around it, but that's slightly further off.


> toward which huuuuuge steps were taken last week

I follow Rust pretty closely, but I can't put my finger on what you're referencing here. Got an RFC link? :)


It's not about an RFC; it's that at the impl period days at RustFest, Oli managed to get Miri integrated into the compiler and bootstrap for the first time. https://github.com/rust-lang/rust/pull/45002

https://github.com/rust-lang/rfcs/blob/1f5d3a9512ba08390a222... is the relevant historic RFC; still not yet stable. This work is what will enable a follow-on RFC for "real" CTFE in Rust, which in my understanding has been waiting for this to shake out. https://github.com/rust-lang/rust/pull/25609 landed right around 1.0, two years ago, but real CTFE required waiting on MIR, which took a while to land, and then building Miri in the first place.


Context: MIR is an intermediate representation used by the Rust compiler; it's the final stage of Rust code before being converted to LLVM IR. Miri is an interpreter for MIR; it bypasses LLVM entirely and allows for direct execution of a certain subset of Rust (specifically the subset of code that can be evaluated at compile-time). Together this greatly expands the scope of constant evaluation in Rust code, granting Rust new metaprogramming faculties without requiring any new DSL (e.g. no need for C++-like template metaprogramming); just use the `const` keyword to run code at compile-time.
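As a rough illustration of where this leads, using `const fn` as it later stabilized (a sketch, not the state of things in 2017):

```rust
// A function the compiler can evaluate entirely at compile time.
// Loops in const fn are part of the const-eval work described above.
const fn fib(n: u32) -> u32 {
    let mut a = 0u32;
    let mut b = 1u32;
    let mut i = 0u32;
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

// Computed by the compiler, not at runtime.
const FIB_10: u32 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
}
```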


How well would Miri work as a generic virtual machine for fairly simple stuff? Essentially I want to compile some type-safe code from a file and run it in a safe way inside of another Rust application.

I'm looking at various lua bindings as well as https://crates.io/crates/wren for this purpose, but I'm wondering if something like miri would work better since another path for compiling this language when real-time code reloading or sandboxing is not required is to translate it to Rust code and compile it with rustc, likely with a compiler plugin.


That's a good question that I don't have the answer to. I'm only passingly familiar with Miri, so I can't say for certain what its intended scope is. Historically it's been developed in this repo: https://github.com/solson/miri , so you could try opening an issue there and asking, though it seems as though it might soon migrate to the official rustc repo (if it hasn't already). You could also try asking around in the Rust channels (try #rust-internals) on irc.mozilla.org to get in touch with the Miri developers directly.


Is there already support for the ESP8266? I started working on it, but I didn't have time to even begin to learn the necessary fundamentals. If it's still not done, I might take it up again.


Mainline support (in Rust proper) for ESP8266/ESP32 would require an Xtensa LLVM backend. There were some rumours about one being developed by the ESP devs, but it was later cancelled. I'm not aware of any other serious attempts. I did find this, which has a single commit (but quite a lot of code): https://github.com/jdiez17/llvm-xtensa

Without an LLVM backend, one option could be mrustc, the alternative Rust implementation that compiles Rust to C. It seems people are working on this for the ESP and having some success: https://github.com/emosenkis/esp-rs


Yeah, my idea was to write an Xtensa backend, but I think I'll need a few months just to understand enough of LLVM. Thanks for the links.


We started porting it for the ESP32 but still are at a very early stage


I have been waiting for support for ESP8266/ESP32, but it doesn't look like it is there yet. I'd really like to use Rust to program my ESP devices. C/Arduino works just fine, but it would give a great excuse to learn Rust.

Do you have a GitHub link for this project? I'd like to star it and help out where I can.


Sorry, I didn't even get to the point of committing stuff :)


Coming from professional environment, my only question is: where's FreeRTOS integration?


Offtopic, but does anyone know what's going on with the FreeRTOS license? They add a GPL-incompatible restriction on benchmarking, but they also license under GPL v2 "or later", which allows users to remove such restrictions. Do they have a clue about their licensing model?


They don't have a clue, no, they're mostly just turbomad about having poor benchmark results published in the past.


"The FreeRTOS GPL exception text follows:

"Any FreeRTOS source code, whether modified or in it's original release form, or whether in whole or in part, can only be distributed by you under the terms of the GNU General Public License plus this exception. An independent module is a module which is not derived from or based on FreeRTOS.

"Clause 1:

"Linking FreeRTOS with other modules is making a combined work based on FreeRTOS. Thus, the terms and conditions of the GNU General Public License V2 cover the whole combination.

"As a special exception, the copyright holders of FreeRTOS give you permission to link FreeRTOS with independent modules to produce a statically linked executable, regardless of the license terms of these independent modules, and to copy and distribute the resulting executable under terms of your choice, provided that you also meet, for each linked independent module, the terms and conditions of the license of that module. An independent module is a module which is not derived from or based on FreeRTOS.

"Clause 2:

"FreeRTOS may not be used for any competitive or comparative purpose, including the publication of any form of run time or compile time metric, without the express permission of Real Time Engineers Ltd. (this is the norm within the industry and is intended to ensure information accuracy)."

Yeah. Nice.

Without pondering it too much (& IANAL), clause 1 seems ok, but 2 is going to contradict the GPL. Calling their license "GPL plus exceptions" is probably going to get them yelled at by the FSF.


I have not used it yet, but maybe: https://github.com/hashmismatch/freertos.rs ?


I’m currently working in my very limited free time on Rust targeted at the CMSIS RTOS v1 interface. I think that may be as close as is reasonable to expect. The Rust stdlib is pretty agnostic, it just expects certain primitives to be implemented. FreeRTOS is one way to accomplish those primitives amongst many.


Appears to be mostly FFI wrappers to Linux syscalls?


This is awesome. There's also progress being made on compiling to NVPTX, so soon-ish it will be possible to write CUDA kernels directly in Rust.



