
If you plan to use Gemini, be warned, here are the usual Big Tech dragons:

   Please don’t enter ...confidential info or any data... you wouldn’t want a reviewer to see or Google to use ...
The full extract of the terms of usage:

   How human reviewers improve Google AI

   To help with quality and improve our products (such as the generative machine-learning models that power Gemini Apps), human reviewers (including third parties) read, annotate, and process your Gemini Apps conversations. We take steps to protect your privacy as part of this process. This includes disconnecting your conversations with Gemini Apps from your Google Account before reviewers see or annotate them. Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.


Google is the best of these. You either pay per token and there is no training on your inputs, or it’s free/a small monthly fee and there is training.


And even worse:

   Conversations that have been reviewed or annotated by human reviewers (and related data like your language, device type, location info, or feedback) are not deleted when you delete your Gemini Apps activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to three years.
Emphasis on "retained for up to three years" even if you delete it!


Well they can't delete a user's Gemini conversations because they don't know which user a particular conversation comes from.

This seems better, not worse, than keeping the user-conversation mapping so that the user may delete their conversations.


How does it compare to OpenAI's and Anthropic's user data retention policies?


If I'm not mistaken, ChatGPT states clearly that they no longer use user data by default.

Also, maybe some services are doing "machine learning" training with user data, but this is the first time I've seen a recent LLM service say that your data can be fed to human reviewers at will.


They seem to use it as long as the chat history is enabled, similar to Gemini. https://help.openai.com/en/articles/7792795-how-do-i-turn-of...


I believe this is out of date. There’s a very explicit opt in/out slider for permitting training on conversations that doesn’t seem to affect conversation history retention.


I don't think this is the same as the AI Studio and API terms. This looks like the consumer-facing Gemini T&Cs.


You can use a paid tier to avoid such issues. Not sure what you're expecting from those "experimental" models, which are in development and need user feedback.


I'm assuming this is true of all experimental models? That's not true with their models if you're on a paid tier though, correct?


All the more reason for new privacy guidelines, especially for Big Tech and AI.


I mean, this is pretty standard for online LLMs. What is Gemini doing here that OpenAI or Anthropic aren't already doing?



