Ideally I want to constrain LLM output to my domain-specific language and then run it, but it seems I would need to fine-tune existing models. What's the easiest hosted or local solution?
How to automatically generate:
- a broad array of security tests;
- the most efficient code;
- the most readable and extensible code?
- Use multi-shot prompting with a validation harness like guardrails [1], re-prompting a commercial model until the output parses.
- Use a local model with a final decoding step that steers token selection towards syntactically valid tokens. [2]
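The first approach boils down to a validate-and-retry loop, which is essentially what guardrails automates. A minimal sketch, where `generate` is a hypothetical stand-in for a commercial model API and `json.loads` stands in for a real DSL parser:

```python
import json

def generate(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a commercial model API call;
    # it "fails" on the first attempt, then returns valid output.
    return '{"cmd": "run"' if attempt == 0 else '{"cmd": "run"}'

def valid(output: str) -> bool:
    # Stand-in validator: real code would parse your DSL's grammar.
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

def prompt_until_valid(prompt: str, max_attempts: int = 5) -> str:
    for attempt in range(max_attempts):
        out = generate(prompt, attempt)
        if valid(out):
            return out
        # Feed the failure back into the prompt for the next attempt,
        # which is roughly what guardrails' re-ask loop does.
        prompt += f"\nPrevious output was invalid: {out!r}. Try again."
    raise RuntimeError("no valid output after retries")

print(prompt_until_valid("Emit a JSON command"))
```

The downside is cost and latency: you pay for every failed attempt, which is what motivates the second approach.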
[1] https://github.com/ShreyaR/guardrails
[2] "Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema" @ https://github.com/newhouseb/clownfish (full disclosure: this is my work)