There is no particular reason to believe either that it does or that it would.
For OpenAI to scramble to "catch up" with a competitor and make such a massive change in strategy, someone would have to be offering an equivalent service (hosted inference) that was either orders of magnitude cheaper than their offering and just as good, or significantly better than it. That, or legal compulsion.
It would, as open-source improvements started to exceed GPT-3.5's performance for specific use cases. At the very least they would have to make it fine-tunable.
Sam Altman said recently that they are already working on making GPT-3.5/GPT-4 fine-tunable; they are just limited by the availability of compute (partly because none of their SFT infrastructure uses LoRA).
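For context on why LoRA matters for the compute bottleneck: the core idea is that instead of updating a full weight matrix during fine-tuning, you train two small low-rank factors and add their product to the frozen weights. A minimal back-of-the-envelope sketch (the layer width and rank below are illustrative assumptions, not anything about OpenAI's actual stack):

```python
# LoRA replaces a trainable d_out x d_in weight update with two low-rank
# factors A (r x d_in) and B (d_out x r), so the effective weight is
# W_frozen + B @ A. Trainable parameters shrink from d_out*d_in to
# r*(d_in + d_out). Shapes here are hypothetical.

d_in, d_out, r = 4096, 4096, 8  # assumed layer width and LoRA rank

full_update_params = d_out * d_in       # naive fine-tuning of one matrix
lora_params = r * (d_in + d_out)        # LoRA adapter for the same matrix

print(full_update_params)               # 16777216
print(lora_params)                      # 65536
print(full_update_params // lora_params)  # 256x fewer trainable params
```

With numbers like these, the optimizer state and gradient memory per fine-tune drop by a couple of orders of magnitude, which is why SFT infrastructure without LoRA eats so much more compute per customer.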
I had previously assumed it was due to safety concerns, since I don't see what stops someone from fine-tuning away all the guardrails.