Why wouldn't we just specify these things with programming languages, though? The purpose of a programming language is to enable precise specification of computation. SPEC.md is 497 lines, the tests another 500. I don't understand what we'd gain from taking an imprecise spec and tests that seem arbitrary and cover the same cases multiple times, feeding them into an LLM, and recreating a version of the computation you want.
If an LLM can do this with English and arbitrary test cases, why wouldn't you pick a programming language and specific test cases? This would give you significantly more repeatability and consistency, all while probably having fewer overall lines of code.
You can define them in a structured way that's not tied to a specific programming language. Imagine a test suite that's entirely YAML inputs and outputs, or JSON, or even CSV.
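For example, a couple of hypothetical cases for a library like whenwords could look like this (the field names and the `time_ago` function are invented here for illustration; this is not the project's actual suite):

```yaml
# Hypothetical, language-neutral test cases (illustrative only).
# Each case names the function under test, its inputs, and the expected output.
- function: time_ago
  input: { timestamp: 1700000000, now: 1700010800 }   # 3 hours apart
  expected: "3 hours ago"
- function: time_ago
  input: { timestamp: 1700000000, now: 1700000042 }   # 42 seconds apart
  expected: "just now"
```

Any language with a YAML parser can load those cases and assert them against its own implementation.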
The key idea is to have one test suite/specification that multiple implementations in different languages can share.
For a formally (in other words, reliably) verifiable implementation, you not only need a formal specification for the library, but also a formal specification of each targeted programming language. And we largely don’t have those.
What is the advantage of that over programming languages, though? At some point you're just creating a new specification language which needs to be learned. If an LLM can go from an English spec to Python unit tests, why not just start with, or at least distribute, Python unit tests? A programming language will allow you to be significantly more correct and consistent than English.
Because if the tests are in Python the LLM still has to convert them from Python to Ruby or whatever, which leaves room for mistakes to creep in.
If the tests are in YAML it doesn't need to convert them at all. It can write a new test harness in the new language and run against those existing, deterministic tests.
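To make that concrete, a minimal harness in Python might look something like this (a sketch only; the `tests.yaml` layout and the `whenwords.time_ago` function are assumptions carried over from the hypothetical example above, not the project's actual layout):

```python
# Sketch of a harness that runs language-neutral YAML test cases against a
# local implementation. The file name, case shape, and whenwords module are
# assumptions for illustration.
import yaml      # pip install pyyaml
import pytest

import whenwords  # the implementation under test (hypothetical module)

with open("tests.yaml") as f:
    CASES = yaml.safe_load(f)

@pytest.mark.parametrize("case", CASES, ids=lambda c: c["expected"])
def test_case(case):
    fn = getattr(whenwords, case["function"])
    assert fn(**case["input"]) == case["expected"]
```

A Ruby or Go port would ship its own dozen-line harness in that language but reuse the same YAML file verbatim.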
My point is that to create a specification you need to use a formal language of some kind. In this example they created a new YAML-based specification language. Why do that rather than use a well-documented existing formal language the LLM knows well, like Python? The translation is either YAML -> new language or Python -> new language. The translation is happening in both cases.
The one advantage I can think of is that it might be more human-readable, but Python is damn close to pseudocode. It'll likely always be a bit annoying to write because it has to be a formal language.
When told "use red/green TDD to write code for this in Ruby", a coding agent like Claude Code will write a test harness in Ruby that loops through all of those YAML tests, run it and watch it fail, then write just enough Ruby that the tests pass.
Yea I guess we're having a definitional disagreement here. To be clear I think this is a good idea and the work you've done using tests from projects to have agents translate libraries is awesome.
But to me that YAML snippet you provided is clearly a specification which needs to be translated to Ruby just as much as Python would be. If the equivalent Python is something like this
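(a hypothetical sketch, assuming whenwords exposes a `time_ago` function that takes a timestamp and a reference time; this is not the real library's signature):

```python
# One test case written as a plain pytest test (hypothetical sketch;
# the time_ago signature is assumed, not taken from the real library).
from whenwords import time_ago

def test_time_ago_three_hours():
    assert time_ago(timestamp=1700000000, now=1700010800) == "3 hours ago"
```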
The YAML is no more clear than the Python, nor closer to Ruby. Honestly I think it's less clear to a human reading it, because it's hard to tell which function is being tested in the context of a specific test case. I guess it's possible Claude is better at working with the YAML than the Python, but that would be a coincidence I think.
>whenwords provides five functions that convert between timestamps and human-readable strings, like turning a UNIX timestamp into “3 hours ago”.
Ah, so you're the person responsible for this brain rot. Could you please DM me your name and address? I think the 20-to-life I'll get will be well worth it.
I've been exploring this pattern recently too. Giving current coding agents an existing conformance or test suite and telling them to keep writing code until the tests pass is astonishingly effective.
I've now got a JavaScript interpreter and a WebAssembly runtime written in Python, built by Claude Code for web, run from my phone.
Ok, I really liked this! A cool concept that would've been sci-fi ~5 years ago (not this specific library, but the idea of spec->code in any language, 0 human intervention). I can see a collection of utilities "specced out" and then shared as specs + tests. Perhaps for harder stuff even some spec formal verification + tests? (no idea how feasible this is, but it's something to look into)
I had no doubts claude/codex/gemini could handle this, but I was surprised that even an open-source model (glm4.7) did it in one shot (I just copy-pasted the prompt from Install.md and came back 10 minutes later):
============================= 123 passed in 0.18s ==============================