
I've been looking at the code on their chat playground, https://chat.inceptionlabs.ai/, and they have a helper function `const convertOpenAIMessages = (convo) => { ... }`, which also contains `models: ['gpt-3.5-turbo']`. I also see `"openai": true` in the API response. Is it actually calling OpenAI, or its own dLLM? Does anyone know?

Also: you can turn on "Diffusion Effect" in the top-right corner, but this just seems to be an animation gimmick, right?



The speed of the response is way too quick to be using OpenAI as the backend; it's almost instant!


I've been asking bespoke questions and the timing is >2 seconds, slower than what I get for the same questions from ChatGPT (using gpt-4.1-mini). Looking at their call stack, I see "verifyOpenAIConnection()", "generateOpenAIChatCompletion()", "getOpenAIModels()", etc. Maybe it's just for compatibility with the OpenAI API?


Check the bottom; I think it's just some off-the-shelf chat UI that uses an OpenAI-compatible API behind the scenes.


Ah, got it. It looks like it's a general-purpose UI that handles a whole bunch of things, so it can also interface with ollama and other APIs.
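
For anyone confused by the naming: "OpenAI-compatible" just means the client speaks the Chat Completions wire format, and the server behind it can be anything (ollama, a dLLM, whatever). That would explain helpers like `generateOpenAIChatCompletion()` appearing in the call stack even if OpenAI is never involved. A minimal sketch of what such a client builds; the model name and endpoint here are hypothetical, not taken from their code:

```javascript
// Build an OpenAI-style Chat Completions request body.
// Any server exposing a /v1/chat/completions route that accepts this
// shape can sit behind the UI, regardless of which model runs it.
function buildChatRequest(model, messages) {
  return {
    model,        // server-side model id, e.g. "mercury" (hypothetical)
    messages,     // [{ role: "system"|"user"|"assistant", content: "..." }]
    stream: true, // stream tokens back as server-sent events
  };
}

// The same body would be POSTed to any compatible endpoint, e.g.
//   POST https://api.example.com/v1/chat/completions   (hypothetical URL)
const body = buildChatRequest("mercury", [
  { role: "user", content: "Hello" },
]);
console.log(JSON.stringify(body));
```

So seeing `"openai": true` or OpenAI-named helpers in the client tells you about the protocol, not the provider.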



