Reverse engineering the ARM1, ancestor of the iPhone's processor (righto.com)
126 points by reportingsjr on Dec 31, 2015 | hide | past | favorite | 44 comments


It's quite interesting that a lack of resources ("given the small size of the design team at Acorn, a simple RISC chip was a practical choice") forced them to focus on simplicity. That simplicity then helped the chip use less power, which later left them well-positioned to capture a large share of a nascent battery-powered market.


Indeed! I think their licensing business model had a lot to do with their success too.


Possibly relevant - but in any event, interesting throughout - is Sophie Wilson's Computer History Museum interview: http://www.computerhistory.org/collections/catalog/102746190


Somewhat related: Micro Men [1], a semi-documentary drama about Sinclair's rivalry with Acorn Computers in the late '70s and early '80s. Sophie Wilson herself has a quick cameo towards the very end.

[1] https://www.youtube.com/watch?v=hco_Av2DJ8o


They never were in the same space. Sir Clive made things cheap, Acorn made things well.


Their stories are very interconnected, though. If Christopher Curry hadn't worked for/with Sir Clive for 13+ years and learned about electronic calculators, he would probably never have spotted the upcoming wave of personal computers.

Also, it seems that Sir Clive's lack of vision for the future of computers, and his scattered mad-inventor-turned-businessman approach, were the main reasons for Cambridge Processing Unit Ltd - later Acorn Computers - being created in the first place.

In a way, the supercomputers we carry in our pockets these days are a direct consequence of Sir Clive's actions (or inactions).


I'll be more than happy to concede those points. It's a pity that Sir Clive didn't have an eye for quality to match his eye for being totally original. Combined, the two would have been deadly; the man was an absolute visionary.

He's still around (75 now).


Indeed. It seems he got bored too easily; sticking with one industry wasn't satisfying enough for him.

I guess serial entrepreneur would be the buzzword these days.


And Steve Furber's Centre for Computing History interview: https://www.youtube.com/watch?v=ZMEBj3FM2aw


Don't forget Hermann Hauser, Acorn co-founder: https://www.youtube.com/watch?v=Y0sC3lT313Q&list=PLQsxaNhYv8...

Great stories in this interview.


I think it would be fascinating if some of the older fab processes could be democratized and people could start experimenting with chip fab at home. Perhaps some tech out of the 3D printing field would be adaptable. No, you're not going to make something blazingly fast, but imagine printing your own ATMEGA328P or MSP430 or even 8086 clone.


I think running mixed lots of wafers is still a thing. However, besides that still being very expensive, the other two issues are the software tools to lay out the chips, and then the cost of making the masks [1]. A friend of mine was working for a startup five years ago; I think he said they needed 2.5 million for a small team plus the initial fab run.

If you are interested, you can get into programming FPGAs for a few grand at most. As far as I can tell, watching from the outside as people do both, it's almost the same work minus the chip layout and manufacturing steps. The design cycle is also far faster and cheaper.

[1] I have no idea what masks cost now, but 15 years ago the company I worked for needed to change a metal mask and it cost $50k, I think a full set was $250,000.


I'm the founder of a startup taping out our first test chip in May... Getting 100 chips back on a shuttle run (also known as a multi-project wafer, where mask costs are shared by multiple companies) will cost us around $250,000. That is just the cost of these first 100 chips. Mask costs for a modern (e.g. 28nm) process start at about $2 million and go up from there pretty quickly, depending on a lot of factors. Once the masks are made, though, it is roughly $5K a wafer (and you can have hundreds of dies on a wafer).
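As a sketch, the economics in this comment can be put into a few lines of Python. The $250K shuttle run, $2 million mask set, and $5K/wafer figures are the commenter's numbers; the dies-per-wafer and wafer counts below are made-up illustrative values, not anyone's real process data.

```python
# Back-of-envelope chip cost model, using the figures from the comment
# above. The 400 dies/wafer and 1,000-wafer volume are hypothetical.

def shuttle_cost_per_chip(run_cost, chips_returned):
    """Per-prototype cost on a shared multi-project wafer (MPW) run."""
    return run_cost / chips_returned

def volume_cost_per_die(mask_cost, wafer_cost, wafers, dies_per_wafer):
    """Amortized per-die cost once you pay for a dedicated mask set."""
    return (mask_cost + wafer_cost * wafers) / (wafers * dies_per_wafer)

print(shuttle_cost_per_chip(250_000, 100))                # 2500.0 dollars
print(volume_cost_per_die(2_000_000, 5_000, 1_000, 400))  # 17.5 dollars
```

The gap between $2,500 per prototype and under $20 per production die is why the mask set dominates low-volume costs.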


This sounds really interesting, though I don't know what much of it means. Can you recommend books or in-depth articles about the process of IC making?


The Wikipedia articles on semiconductor manufacturing are pretty good. As far as ASICs go, I struggled for a while to find a great resource covering all aspects of design, verification, and prototyping. Here's the best one I found for you:

http://cc.ee.ntu.edu.tw/~ywchang/Courses/PD/EDA_Chapter1.pdf

On top of that, there's the cost of the photomasks that print the samples for testing (and later production). They range from tens of thousands on oldest nodes to millions on newer ones. Every time you screw up and change the design you buy another mask. Hence, avoidance of new nodes by low volume groups, design/verification tools that can cost $1mil/yr a person, and heavy re-use of components.

One trick that's popular is called multi-project wafers: a mask and chip run that's shared by several people with high cost split among them. This is available through groups like MOSIS and X-Fab. Lets you test your design in [more] affordable pieces. Plus, tooling and overall cost has come down for older nodes which are still highly usable for many scenarios:

http://gsaglobal.org/forum/2009/1/articles_full_double.asp

Open tools are getting there slowly but not reliable or competitive enough. Here's the only open flow I know of:

http://opencircuitdesign.com/qflow/

So, overall barrier to entry for ASIC design is expensive expertise, high-cost of proprietary tools, and cost of masks (or MPW's). Going cheap on all these still gives several hundred thousand for a useful design on a good node. Simpler, single-purpose chips on oldest nodes can be less than that, though. Here's an example:

http://www.planetanalog.com/author.asp?section_id=526&doc_id...

Hope all this helps your understanding of the situation.


So, how is REX coming along on its chip? Anything working and showing benefits yet?


We're aiming to begin disclosing architecture and software details in March, with talks at 3 or 4 major industry conferences between then and August. As I said, we have our 28nm shuttle run booked for May, so barring any major issues, we're aiming for an August/September public demonstration of our first chips.

We have already been evaluating architectural models and FPGA demonstrations with partners for the past couple of months, but we feel it's better to wait for real silicon before publicly showing benchmarks. So far, so good.


Makes sense. I look forward to that. Out of curiosity, could you post what the 28nm run cost you, and for what share of the wafer? It might be enlightening to people wondering about the real-world cost of such things, outside of the confusing $/mm2 figures on many sites.


We're all taking 2 weeks off after tapeout, so depending on how much I want to work during those 2 weeks, I'm thinking about writing a very long blog post (or a series of them) going over the entire process, from moving out to the Bay Area and going a bit crazy deciding I wanted to get into semiconductors, to now taping out a chip. The thing I would have loved most while working on this over the past 2.5 years would have been a first-hand account that could have saved me a lot of time at the beginning. Plus, you are totally right that there is not much information out there that breaks it down for the rest of the tech industry.


That's awesome. See, it's stuff like this that's why you're among my favorite chip designers despite being new to the game. ;) Look forward to the write-up.


The older fab processes involved some pretty nasty chemicals. I seem to remember that hydrofluoric acid is used in multiple steps (to make contact cuts through oxide layers?). To give you a broader idea, a former MOS Technology / Commodore fab ended up as a Superfund site [1].

[1] http://cumulis.epa.gov/supercpad/cursites/csitinfo.cfm?id=03...


We have a couple of similar sites here in NL, with projected clean-up times in the hundreds of years. Very bad legacies from the time when the environment wasn't even on the radar of most governments and companies dumped their waste with abandon.


Recent HN story about exactly this: https://news.ycombinator.com/item?id=10723305


Somewhat on topic, I wonder if Acorn makes even a penny from every smartphone made today (at least the non-Intel ones).

Because that would be a lot of pennies. A true UK success story.


Acorn no longer exists, but ARM does charge royalties for their processor designs, around 1% - 2.5%. Their revenue was £795.2 in 2014, so yes, those pennies and fractions of a penny do add up.

Edit: The cheapest ARM processor I found on Digikey is $0.35. Digikey must be buying them for under $0.15, so the royalties would be under 3/10ths of a cent.
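As a quick sanity check on that arithmetic (the 1% - 2.5% rates and the $0.15 unit price are the guesses from these comments, not ARM's actual terms):

```python
# Royalty per chip at the rates mentioned above; unit_price is the
# guessed distributor buy price in US dollars.
unit_price = 0.15

for rate in (0.01, 0.025):
    cents = unit_price * rate * 100
    print(f"{rate * 100:.1f}% royalty -> {cents:.3f} cents per chip")
```

Even at the high end, each chip contributes well under half a cent, which is why volume matters so much for this model.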


Getting back to ck2's comment, ARM Holdings (the company spun off from Acorn to do ARM development) is in Cambridge, so you can count it as a UK success story. It's interesting that they don't make any chips themselves, but are purely an IP company.


Not many companies actually make ICs themselves anymore. A lot of the bigger IC companies you know are fabless: Nvidia, AMD, Broadcom, Qualcomm, etc.

It is simply way, waaay too expensive to operate a fab nowadays.


If I understand things correctly, ARM takes things a step further. Not only do they not have a fab, they don't design or sell chips at all. ARM processors are all built and sold by other companies (STMicroelectronics, NXP, TI, etc), using ARM Holdings' IP - the core design (as an abstract netlist) or a license on the instruction set itself.


ARM designs the core.

STM, NXP, MediaTek, etc. buy the core design, put it on a piece of silicon, design peripherals around it (SATA ports, UARTs, SDIO controllers, etc.), add those to the silicon, put interconnects in between, and sell the complete SoC.

This is of course EXTREMELY oversimplified, but the idea is the same: ARM makes the core, and others make SoCs out of it. (I am not aware of anywhere you can buy a bare ARM core without it being part of an SoC.)


ARM designs and licenses chip/core designs, which are then bought by companies that further modify them to fit a specific application. ARM designs the cores, but not the entire chip.


How do you operate at the level of Nvidia or AMD without running your own fab? I'm sure it's mind-bogglingly expensive, but the marginal return is surely worth it.


Fabs are very expensive. You can see numbers ranging from 4 to 10 billion dollars each. Those are the fixed costs that you now need to spread over production. It looks like Nvidia does around 20 million chips a year. If a $4bn fab was useful for 4 years, then those fixed costs work out at $50 per chip!
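That back-of-envelope amortization works out as claimed. The $4bn fab cost, 4-year useful life, and 20 million chips a year are the rough figures from this comment, not real Nvidia or foundry numbers:

```python
# Fixed fab cost spread over every chip produced during its useful life.
fab_cost = 4_000_000_000   # dollars, rough figure from the comment
useful_years = 4
chips_per_year = 20_000_000

fixed_cost_per_chip = fab_cost / (useful_years * chips_per_year)
print(fixed_cost_per_chip)  # 50.0 dollars of fab cost baked into each chip
```

Double the volume (say, by taking on other companies' chips) and that fixed overhead per chip halves, which is the economies-of-scale point below.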

If you make other people's chips on the same fab then the per chip costs go down, assuming there is spare capacity. The economies of scale really do help. Which is why there are a few foundries who do most of the chips, and those foundries can spread the costs over a large number of chips including later ones that don't need the newest process. For example Intel is known to do their CPUs on the current process, while doing the chipsets on the older processes.

In other words, work out what set of numbers would make sense, and you won't be able to find any. New processes keep getting more expensive, and competitors adopt them too, so you have to sustain your current fab as well as the work to do the next generation one. This requires a lot of available money.

Don't build a fab unless you are sure you can use almost all its capacity for the next several years.


Wow, I had no idea the fixed cost ran that high. That is very enlightening.


If you've got a successful (i.e. working well) and fully utilized leading edge fab, it can be more profitable than printing bank notes.

However, it's so insanely expensive to set up that type of chip manufacturing that the only way it is really viable now is to share the costs. Samsung, TSMC, and Intel (and maybe GF) are the only groups left who can afford it (see http://www.eetimes.com/document.asp?doc_id=1327060 ), and you have to keep up the R&D every year to stay competitive...


Actually, AMD/ATI spun off their fab into a separate business. Makes sense, really. It lets them fab their chips wherever it makes sense, while still giving them a fallback for safety.


Another benefit to not running your own fabs is that you are a bit removed from the rat race to keep your tech up-to-date.

Getting a new, smaller process to run well is expensive and difficult, but gives important benefits. If you don't run your own fabs, you can't jump ahead of others.

But you are also safe from investing a billion or a few in an upgrade that doesn't work out properly, and you can still get basically the same tech as all the other fabless companies.


They have a very close relationship with their preferred foundries and design chips with a specific foundry and process in mind. The big benefit of going fabless is that you don't have to worry so much about keeping your fab busy enough to cover its expenses. Intel's basically the only company that always sells enough of its own chips to keep its fabs busy and profitable.


Acorn kinda still exists, but it was rebranded as Element 14 a few years back. In a slightly interesting twist, I own an Element 14 clone of the BeagleBone Black - which is powered by ... you guessed it, an ARM chip (built by TI). You can actually get RISC OS (the original OS for Acorn computers) images for the BBB - http://beagleboard.org/project/riscos


You are thinking of the wrong Element 14.

The Acorn one was 'Element 14 Ltd', purchased by Broadcom, now Avago.


I always thought that Element 14 was just a Farnell trading brand.


£795M or £795B ?


£795B would be nearly half the GDP of the UK.


I lack knowledge, put this way it looks ridiculous.


M




