I've been looking at the code on their chat playground, https://chat.inceptionlabs.ai/, and they have a helper function `const convertOpenAIMessages = (convo) => { ... }`, which also contains `models: ['gpt-3.5-turbo']`. I also see `"openai": true` in the API response. Is it actually calling OpenAI, or is it calling their dLLM? Does anyone know?
Also: you can turn on "Diffusion Effect" in the top-right corner, but this just seems to be an animation gimmick, right?
I've been asking bespoke questions, and responses take >2 seconds — slower than what I get for the same questions from ChatGPT (using gpt-4.1-mini). Looking at the call stack, I see functions like `verifyOpenAIConnection()`, `generateOpenAIChatCompletion()`, and `getOpenAIModels()`. Maybe these names are just there for compatibility with the OpenAI API?
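For what it's worth, "OpenAI-compatible" usually just means the backend accepts the same `/chat/completions` request shape at a different base URL, so `OpenAI`-named helpers don't necessarily mean OpenAI is serving the responses. A minimal sketch of what such a request looks like (the base URL and model name below are my assumptions for illustration, not confirmed values from their code):

```python
import json

# Assumed endpoint for illustration only -- not verified against their playground.
BASE_URL = "https://api.inceptionlabs.ai/v1"


def build_chat_request(model, messages):
    """Build an OpenAI-style /chat/completions request.

    Any OpenAI-compatible server accepts this same payload shape;
    only the base URL (and API key) differ from api.openai.com.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": "Bearer <API_KEY>",  # placeholder, not a real key
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "messages": messages}),
    }


# "mercury" is a hypothetical model name here.
req = build_chat_request("mercury", [{"role": "user", "content": "hello"}])
print(req["url"])
```

So functions like `generateOpenAIChatCompletion()` could be pointed at their own dLLM backend while keeping the OpenAI client code unchanged.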