Maybe then they can help us with the Capstone[1][2] disassembly engine auto-sync (automatic synchronization from the LLVM TableGen files) effort[3]. ARMv7, ARMv8/9, and PowerPC are nearly finished, and MIPS is in the near-term plans. Nobody has stepped in for RISC-V yet.
Nice to see Rivos mentioned here, even having a vice chair position. Besides this, the only other place they have appeared in public is in a few kernel patches. The fact that they could get vice chair shows that they either have a lot of funding or have demoed something impressive to the other companies listed here like Google or Samsung. Not surprising per se as they have a lot of former PA semi/Apple employees but still nice to see something showing they aren't vaporware.
Rivos has also been regularly showing up with talks at RISC-V events.
Whatever they're developing, they are doing it in secret. Given their strong teams, we can expect a very high-performance RISC-V implementation.
I don't advise people to expect anything from Roma. It looks dead. And it's expensive. And full of crypto buzzwords.
On the other hand, there is already the PineTab-V (using the JH7110 SoC, same as the VisionFive 2, Star64, and Milk-V Mars), and Sipeed are planning a laptop using their already-shipping Lichee Module 4A (the CPU daughterboard on the Lichee Pi 4A), which uses the same TH1520 SoC the Roma is supposed to use -- but Sipeed will ship (they have a great track record) and their price will be $300 to $500, not $1500.
Probably won't be too fast, but now that there's an RPi 3-shaped RISC-V SBC coming soon[1], you should theoretically be able to use a kit like the CrowPi 2[2] to build a laptop out of one.
Cool. I always thought RISC-V could use a Linaro type of organization to help make it easier for people to build and develop on RISC-V systems. Sounds like this is it.
A "chance"? Maybe, but would there be anything to gain from such uniformity? Big CPUs are made by big teams, just like software. And you try to maximize re-use of vetted recipes, i.e. no need to reinvent power management. I would assume that for specialty backstage processors the core that the specialist team already knows is the one that gets used. Even if the backstage is disclosed, no code will ever be shared between the main cores and the support cores.
I’ve rarely looked at the output of a compiler and thought “that’s great!” but on the other hand, the fact that I’ve rarely looked at the output of a compiler at all seems to indicate that they’ve got their priorities straight.
It seems that the major compilers, gcc and clang, are now suffering from a sort of "planned obsolescence": ISO C keeps adding stuff, then somebody manages to use the new stuff in some critical system component and forces everybody to upgrade. It is even more acute with gcc extensions and Linux, for instance. ISO C's planned obsolescence seems to happen on a longer cycle than gcc extensions for Linux code (or glibc, or gcc itself), but C++ is even worse: c++11, c++14, c++17, c++489374892347892374238... C++ has orders of magnitude more planned-obsolescence trouble than C.
Even if they could generate amazing code, I could not care less: I am so angry about this manic and systemic planned obsolescence that I now try to code everything in assembly (avoiding grotesque assembly code generators and absurd usage of the macro preprocessor).
I write x86_64 assembly code, but I am confident that porting to RISC-V would be brutal but not that hard... and I kind of know it will happen.
That's a compelling question. So much so that I imagine someone has already investigated this. I might go seek out such work, but whatever I find is likely to already be out of date...
I'm starting from the assumption that the vast bulk of hand optimized assembly does not elaborate on the thinking behind the implementation in a way that can be processed through training. Certainly not assembly obtained by reverse engineering. Assuming I'm correct, does this mean there is a dearth of training material available for building such a model?
It would seem that one would first need a model that could consume arbitrary assembly and somehow discover its intent, generate "labels" (apologies if I'm misusing terms here), then feed the output into another model.
Weird; I would never presume it, only suspect it. I tend not to be overconfident in compilers: what I presume is that compilers inject "convenient bugs" into machine code, bugs which are not in the compiled language but which in the end can be exploited. We all know that "software security audits" are about finding "holes" in some machine code, then propagating the fix into the compilers (and often the compiled language). If the machine code changes, the audit must be done all over again.
But here is where assembly is better: near-zero planned obsolescence from compilers (since there are none), which is REALLY a pain over the medium/long run!
Writing assembly is not really about speed anymore nowadays; it is more about independence from those grotesquely and absurdly massive and complex compilers.
Since I am more and more comfortable writing assembly, it is usually less of a pain to write assembly directly than to refactor and comment compiler assembly output.
I'm slightly out of the loop, but don't modern compilers often actually skip real assembly and go from source code to an intermediate representation (which is like asm but not quite the same) to binary? Or is that beside your point?
There have always been compilers that skip the "make assembly source" step and go straight to object files -- in fact, the first time I saw Unix V6 cc I was a bit scandalised (but it made sense on a system where programs had to be <= 64k in size).
This is pretty rich coming from companies like NVIDIA and Google, both having histories of releasing barely portable, if even portable at all, code.
While I can compile and run ArcticFox on my AlphaServer with zero work, I dare anyone to try to compile Chrome (or Chromium) for any non-mainstream architecture, whether it's sparc64, earmv6hf, MIPS, RISC-V, whatever. I'll wait ;)
Where did you get the binaries? apt doesn't have any for me. If you built from source, did you build the master branch, or a fork? I'm trying to get either firefox or chromium running on my visionfive 2 but I'm not having that much luck...
Thanks a lot for the pointers. As of now, I'm running the official Debian image. How far along is arch? Has it caught up with Debian, or even overtaken it?
Be careful that the "official Debian image" from StarFive is based on a "debian snapshot" from summer last year. I consider it more of a demo system than something to use in the real world.
Yes, the RISC-V software ecosystem has made significant progress during the last year. It has also made significant progress since the VisionFive 2 started shipping in January, as it was the first mass-produced cheap RISC-V SBC.
I have experienced that ecosystem acceleration (it is not my first Linux-capable RISC-V hardware). Software got better really fast ever since VF2.
Userspace-wise, that Arch I linked just works. I used the debian image to unpack their root tarball and set things up via chroot.
You need to understand the standard RISC-V boot process, and particularly how to configure u-boot. I assume you have serial port access; if not, I recommend grabbing a CP2104-based USB-to-TTL serial adapter from AliExpress (~$2). It is very hard to interact with u-boot or debug boot issues without one.
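For a sense of what "configuring u-boot" over that serial console involves, a session typically looks something like the sketch below. Everything here is illustrative, not VisionFive 2 specific: the partition numbers, kernel path, and device tree file name are assumptions you would replace with your board's actual values.

```
# At the u-boot prompt over the serial console (hypothetical values):
setenv bootargs 'root=/dev/mmcblk1p4 rw console=ttyS0,115200'
load mmc 1:3 ${kernel_addr_r} /boot/Image        # load the kernel image
load mmc 1:3 ${fdt_addr_r} /boot/your-board.dtb  # load the device tree (name assumed)
booti ${kernel_addr_r} - ${fdt_addr_r}           # boot kernel with that device tree
```

`kernel_addr_r` and `fdt_addr_r` are standard u-boot environment variables for staging addresses; a working setup would usually persist this logic in the environment or a boot script rather than typing it each time.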
The kernel is what gets difficult. You need a kernel that supports the VisionFive 2. I did compile mine from one of the trees maintained by people in the forum.
StarFive managed to get a lot of code merged upstream[0] in a short amount of time, but you'd be missing important features such as video output if you use that.
Once more patches get upstreamed and the UEFI effort gains console support (video output and USB keyboard), boot should be as easy as on any modern PC, with Linux distributions' generic install media working out of the box.
[1] http://www.capstone-engine.org/
[2] https://github.com/capstone-engine/capstone
[3] https://github.com/capstone-engine/capstone/issues/2015