This is meant to be an introduction though, right? You can simply write “some people do X, and others claim Y is better” then move on.
I read several paragraphs of the article and I still don’t know why you’d use one, despite taking computer architecture and analog electronics courses in undergrad.
I don’t want to read about logic gates again and I don’t want to read about the nuances before I broadly understand what the point is.
For anyone else still wondering, here’s Wikipedia:
> FPGAs have a remarkable role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.
Basically, rapid prototyping I guess. That makes sense.
If that was an ask for a specific example, one of the most common uses for FPGAs is DSP. Say you have a simple FIR filter with, say, 63 taps. To do this on a CPU requires you to load two values and do a multiply/accumulate for each tap in sequence. Very (!!) optimistically, that’s about 192 instructions. With an FPGA, you can do all the multiplications in parallel and then just sum the outputs - probably done in 2 cycles, and with pipelining your throughput could be a sample every clock.
If the FPGA is too slow, too power inefficient etc you can (if you have the money!) take the same core design and put it in an ASIC. The FPGA provides an excellent prototyping environment; in this example you can tune the filter parameters before committing to a full ASIC.
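To make the sequential cost concrete, here’s a rough sketch of that FIR filter on a CPU. The names and coefficient values are illustrative, not from any real design:

```c
#define NUM_TAPS 63

/* Hypothetical coefficients and delay line -- real values come from filter design. */
static double coeffs[NUM_TAPS];
static double history[NUM_TAPS];

/* One output sample. Each tap costs a coefficient load, a sample load and a
 * multiply-accumulate, so the inner loop alone is roughly 3 * 63 = 189
 * sequential operations -- hence the ~192-instruction estimate. */
double fir_step(double input)
{
    /* Shift the delay line (a real implementation would use a circular buffer). */
    for (int i = NUM_TAPS - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = input;

    double acc = 0.0;
    for (int i = 0; i < NUM_TAPS; i++)
        acc += coeffs[i] * history[i]; /* one MAC per tap, in sequence */
    return acc;
}
```

On an FPGA, each of those 63 multiplies gets its own hardware multiplier (typically a DSP slice), so they all happen in the same clock cycle instead of one after another.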
> multiply/accumulate for each tap in sequence. Very (!!) optimistically, that’s about 192 instructions
This is what all those vector instructions are for.
FPGA is kind of invaluable if you have lots of streams coming in at high megabit rates, though, and need to preprocess down to a rate the CPU and memory bus can handle.
Yes, indeed :) Didn’t want to muddy the waters with vector instructions, and it’s fair to say that the dedicated DSP chip market has been squeezed by FPGAs on one side and vectorised (even lightly, like the Cortex-M4/M7 DSP extension) CPUs on the other.
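For the curious, the vectorisable core of an FIR is just a dot product. A minimal sketch (function name is mine) written so an autovectorising compiler can emit SIMD multiply-adds:

```c
#include <stddef.h>

/* The inner loop of an FIR filter is a dot product. With `restrict`
 * promising no aliasing, a compiler at -O3 can typically turn this loop
 * into SIMD multiply-accumulate instructions, processing several taps
 * per instruction instead of one. */
double dot(const double *restrict a, const double *restrict b, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
```

That’s the squeeze from the CPU side: a handful of vector MACs per sample rather than 63 scalar ones, though still sequential compared to the FPGA’s fully parallel multiplier bank.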
You can do the multiplications in parallel but summing 63 values in one clock is not going to work that well. You would almost certainly want more pipelining, though with an FIR you can do this without increasing latency.
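To put a number on that: summing pairwise halves the operand count each level, so an adder tree over 63 values is ceil(log2(63)) = 6 adders deep, and each level would typically become one pipeline stage. A quick sketch of the depth calculation (the function is illustrative, not from the thread):

```c
/* Depth of a balanced binary adder tree over n inputs: each level sums
 * adjacent pairs (an odd element passes straight through), so the number
 * of operands halves per level until one result remains. */
int adder_tree_depth(int n)
{
    int depth = 0;
    while (n > 1) {
        n = (n + 1) / 2; /* pairwise sums per pipeline stage */
        depth++;
    }
    return depth;
}
```

Six levels of addition in one combinational path is a long critical path at high clock rates, which is why you’d register between levels; since an FIR is feed-forward, the extra stages add latency to the first result but don’t reduce throughput.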
What do you mean by "you"? Maybe "you" as in a general consumer doesn't need an FPGA, but I guess one could argue a general consumer doesn't need a general purpose computer either.
There are certainly many use cases where you absolutely do need an FPGA, e.g. anything where you need to process large amounts of IO in realtime. For example, the guys from SimulaVR talk about how they use an FPGA for display correction here: https://simulavr.com/blog/testing-ar-mode-image-processing/
Many modern devices would not function without FPGAs.
> anything where you need to process large amounts of IO in realtime.
I'm working on an FPGA-based system right now. We're using an FPGA precisely because this is what we're doing -- about a hundred I/O ports that have to be processed with as little latency as possible.
(SimulaVR dev) It's not wrong to say that in most cases, tasks are better solved without an FPGA.
But when you need one, you need one (or an ASIC if you have the volume and don't need reconfigurability).
I suggest it purely for educational purposes. The first struggle isn't identifying the best use case - it's understanding wtf is going on. Putting it in terms of something more familiar is helpful for that.
Your thing would make for a wonderful followup topic though.
I think the problem is identifying cases where you really need an FPGA. Most of the time you don't.