If you're syntax checking every token, you're doing it AFTER the LLM has spat out its output. You haven't actually done anything to force the LLM to produce correct code; you're just rejecting invalid output after the fact.
If you could force it to emit syntactically correct code, you wouldn't need a separate syntax check afterwards.
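To make the distinction concrete, here's a rough sketch of what per-token syntax checking amounts to (the `model.sample_next` and `check_syntax` interfaces are made up for illustration): the model samples freely, and the checker only vetoes a token once it already exists.

```python
def generate_with_rejection(model, prompt, check_syntax, max_tokens=256, max_retries=20):
    """Token-by-token rejection sampling: the model generates freely,
    and invalid tokens are vetoed only AFTER they've been produced."""
    tokens = []
    while len(tokens) < max_tokens:
        for _ in range(max_retries):
            candidate = model.sample_next(prompt, tokens)  # hypothetical API: model proposes a token
            if check_syntax(tokens + [candidate]):         # the check happens after sampling
                tokens.append(candidate)
                break
            # Invalid token discarded. Nothing about the model changed,
            # so it's free to keep making the same mistake.
        else:
            # Rejection can dead-end: the model never offered a valid continuation.
            raise RuntimeError("gave up: model kept proposing invalid tokens")
        if candidate == getattr(model, "eos_token", None):
            break
    return tokens
```

Notice the loop never steers the model; it just discards and resamples, and it can fail outright if no valid token is ever proposed.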
How do you disallow it from generating specific things? My point is that you can't. And again, how do you stop it from generating certain tokens, but only in certain contexts?
You would need to somehow analyze the prompt, figure out that the user is asking to add two numbers, and selectively enable that filter. If the filter were left enabled permanently, you'd functionally just have a calculator.
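Here's roughly what that selectively enabled filter would look like, as a toy sketch. The keyword heuristic is a stand-in for whatever prompt analysis you'd actually use; all the names here are invented for illustration.

```python
import re

def looks_like_addition(prompt: str) -> bool:
    """Stand-in for "analyze the prompt": a naive keyword heuristic.
    The whole problem hides in here -- recognizing every phrasing of
    "add two numbers" is itself an open-ended language task."""
    return bool(re.search(r"\b(add|plus|sum)\b|\d+\s*\+\s*\d+", prompt.lower()))

def allowed_tokens(prompt: str, vocabulary: list[str]) -> list[str]:
    if looks_like_addition(prompt):
        # Filter on: only digit tokens may be emitted.
        return [tok for tok in vocabulary if tok.isdigit()]
    # Filter off: if the heuristic above misfires, the "guarantee" is gone.
    return vocabulary
```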
But analyzing the prompt is not a task that can be reliably automated either, for the exact same reasons the original model couldn't do addition consistently.
So your solution has the exact same problem as the original. If you ask for an addition, you can't be sure you'll get numbers, because you can't be sure the filter will be enabled when it's needed. You've just shifted the problem into a separate component, left it as an exercise for the reader, and declared it trivial.