Hacker News
stolsvik on March 30, 2023 | on: Gpt4all: A chatbot trained on ~800k GPT-3.5-Turbo ...
As I understand these models, that makes no sense whatsoever. Is it the tokenisation that takes this crazy amount of time? Because each additional token should take exactly the same amount of time as the first.
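One common reason per-token latency is not constant (a hypothetical explanation, not something the commenter or the linked project confirms) is decoding without a KV cache: if the model re-runs the full forward pass over the whole sequence at every step, the work per new token grows with sequence length, whereas caching prior keys/values keeps the forward-pass work per step roughly flat. A minimal cost-model sketch, counting tokens processed per forward pass (function names are illustrative, not from any real library):

```python
def naive_decode_cost(prompt_len: int, new_tokens: int) -> int:
    """Tokens processed when the whole sequence is re-run at every step.

    Each decoding step re-processes the entire sequence so far, so the
    total work grows quadratically in the number of generated tokens.
    """
    total = 0
    seq = prompt_len
    for _ in range(new_tokens):
        seq += 1        # one new token appended to the sequence
        total += seq    # forward pass over the full sequence again
    return total


def cached_decode_cost(prompt_len: int, new_tokens: int) -> int:
    """Tokens processed with a KV cache.

    The prompt is processed once; after that, each step only runs the
    forward pass for the single new token (attending to cached keys/values).
    """
    return prompt_len + new_tokens


if __name__ == "__main__":
    # With a 10-token prompt and 5 generated tokens:
    print(naive_decode_cost(10, 5))   # 11+12+13+14+15 = 65 token-passes
    print(cached_decode_cost(10, 5))  # 10+5 = 15 token-passes
```

So even though attention itself still looks at a longer context each step, the dominant slowdown in a naive implementation is recomputing earlier tokens' activations, which a KV cache avoids.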