That would only be possible if you could prevent hallucinations from ever occurring, which you can't. Even if you supply a strict schema, the model will sometimes act outside of it and infer the existence of "something similar".
That's not true. You say the model will sometimes act outside of the schema, but models don't act at all: they don't hallucinate by themselves, and they don't produce text on their own. They do all of this in conjunction with your inference engine.
The model's output is a probability distribution over every token. Constrained output is a feature of the inference engine: with a strict schema, the engine can discard every token that doesn't adhere to the schema and select the top token that does.
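To make that concrete, here's a minimal sketch of the masking step in Python. The model stub, the toy vocabulary, and the allowed_next_tokens schema checker are all made-up stand-ins for illustration, not any particular engine's API:

    import math

    VOCAB = ['{', '"name"', ':', '"Alice"', '}', 'banana']

    def model_logits(prefix):
        # Stand-in for the model: one raw score per vocab token.
        # A real model conditions on the prefix; here we fake it,
        # and 'banana' always scores highest.
        return [1.0, 0.5, 0.3, 0.4, 0.2, 2.0]

    def allowed_next_tokens(prefix):
        # Stand-in for a schema/grammar checker: which tokens keep
        # the output a valid {"name": "..."} JSON object?
        order = ['{', '"name"', ':', '"Alice"', '}']
        return {order[len(prefix)]} if len(prefix) < len(order) else set()

    def constrained_decode():
        prefix = []
        while True:
            allowed = allowed_next_tokens(prefix)
            if not allowed:
                break
            logits = model_logits(prefix)
            # Mask every token the schema forbids, then take the top
            # remaining token. The model never gets to emit 'banana'.
            masked = [score if tok in allowed else -math.inf
                      for tok, score in zip(VOCAB, logits)]
            best = max(range(len(VOCAB)), key=lambda i: masked[i])
            prefix.append(VOCAB[best])
        return ''.join(prefix)

    print(constrained_decode())  # -> {"name":"Alice"}

The "model" keeps wanting to say 'banana', but the engine never lets that token through, so the decoded text always conforms to the schema.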