SBCL is, together with LuaJIT, among the fastest dynamic language implementations there are. It runs laps around things like Python and Ruby. It does not have a steep learning curve: you become quite capable quite fast, but there is a lot to an implementation like SBCL and it will take a long time to understand all of it.
The metaprogramming utilities of Lisp are still unmatched.
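As a small sketch of what that metaprogramming looks like: a hypothetical `with-timing` macro (the name and behavior are my own illustration, not from any library) rewrites its body at macroexpansion time, something a plain function cannot do because a function would evaluate its arguments before being called.

```lisp
;; Hypothetical `with-timing` macro: wraps a body, returns its value,
;; and prints how long the body took to run.
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))   ; fresh symbol avoids variable capture
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~,3f s~%"
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second))))))

;; Usage:
;; (with-timing (loop repeat 1000000 sum 1))
```

Because macros receive unevaluated source code, this kind of control-flow abstraction composes with the rest of the language for free.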
Scheme (my daily driver) is a smaller language. It consists of a small set of well-chosen primitives that easily compose to build higher abstractions. It is really nice to work with.
Generally they are in the same league. I would expect SBCL to be faster in some benchmarks and applications, but slightly slower in real-life Lisp applications. Allegro CL and LispWorks have been used for some very large commercial applications, where the application developers demanded special optimizations over the years. The implementors actually don't pay much attention to benchmark performance - but application developers pay for real-life performance.
I would expect that SBCL might have an advantage in some areas where it's easier to write fast code, thanks to its type inference and compiler hints. That's very useful.
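To make the hints concrete, here is a sketch (function name and body are my own example) of the kind of code where SBCL's type inference pays off: with type declarations and an optimize hint, the compiler can emit unboxed float arithmetic and will warn at compile time when it cannot prove the types it needs.

```lisp
;; Sketch: type declarations let SBCL compile tight numeric code.
(defun sum-of-squares (xs)
  (declare (type (simple-array double-float (*)) xs)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (loop for x across xs do (incf acc (* x x)))
    acc))

;; (sum-of-squares (make-array 3 :element-type 'double-float
;;                               :initial-contents '(1d0 2d0 3d0)))
;; => 14.0d0
```

Compiling with `(speed 3)` also makes SBCL report efficiency notes, so you can see exactly where boxing or generic dispatch remains.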
Not that I know of, but LW 64-bit and SBCL are comparable in my experience. Allegro, I have heard, is slightly slower but with better memory usage (so maybe like CCL?)
I just mentioned them because many ignore them as commercial products, yet I would consider them the survivors of the Lisp Machine era's developer workflows.
So I would assume that, parallel to the graphical developer tooling, they also offer quite good compilers.
The compiler is only 1/3 of the equation. One also needs a fast runtime (memory management, signal handling, implementation of language primitives like bignum arithmetic, ...) and a fast implementation of the core language library - since that is also mostly written in Lisp.
Additionally there are roughly three usage modes for the larger CL implementations:
1) interpreter, often used for development/debugging
2) safe compiled code with full error checking and full generics, often with lots of debug info
3) optimized code, with various degrees of unsafety, with no debug information, often with limited or no generics
Often applications are a mix of 2) and 3), where 3) is limited to the portion of the code that actually needs to be VERY fast. This means that the large majority of the code is fully safe and fully generic -> thus it has a lot of influence on how fast the language/application feels.
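A minimal sketch of how that mix looks in practice (the function is a made-up example): the global compiler policy stays safe and debuggable, and only the hot spot locally opts into the optimized, unsafe mode.

```lisp
;; Mode 2) globally: safe, fully checked, debuggable code.
(declaim (optimize (speed 1) (safety 3) (debug 3)))

(defun hot-inner-loop (a b)
  ;; Mode 3) locally: this declaration overrides the global policy
  ;; for just this function.
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type fixnum a b))
  (the fixnum (+ a b)))
```

Everything else in the image keeps full error checking; only `hot-inner-loop` trades safety for speed.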
For example, when one uses CLOS (which provides a lot of generic and extensible machinery), a CLOS implementation will use a lot of caching to make it run fast -> caches cost memory. Now, when one starts a CLOS-based application, these caches need to be computed and filled - which might make the application feel a bit slow or sluggish at startup. So it makes sense, when generating an application, to save it with the caches pre-filled - then at application start the caches are simply already loaded and there is no performance hit.

That's not something one can see in a simple micro-benchmark, nor does it depend on the 'compiler' (the part which compiles code and generates the machine code) - that's performance from the wider CL system architecture -> one needs to be able to provide that to CLOS applications to improve user acceptance.
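On SBCL, a build step along these lines is one way to ship pre-warmed caches; this is a sketch where `warm-up` and `main` are hypothetical application functions, not part of any library.

```lisp
;; Sketch (SBCL-specific): exercise the CLOS-heavy code paths once so
;; the dispatch caches get filled, then dump an executable image that
;; starts with those caches already in place.
(defun build-image ()
  (warm-up)                        ; hypothetical: call the generic functions once
  (sb-ext:save-lisp-and-die "app"
                            :executable t
                            :toplevel #'main))  ; hypothetical entry point
```

The saved image then starts with dispatch already cached, which is exactly the startup effect described above.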