The technical implication here is that 'deleted' or 'hidden' doesn't mean gone. It’s interesting to see the tension between GDPR-like 'right to be forgotten' and the need for data preservation in legal investigations. However, selective hiding based on PR risk is different from automated safety filters. It suggests a manual layer of intervention that most users aren't aware exists.
They were forced to retain even 'deleted' chatlogs about half a year ago because of a copyright lawsuit involving the NYT.[1] Once more, the copyright-industrial complex makes things weird for everyone.
Right, but that's retention for legal defense — they keep everything. The selective hiding is a different layer. They retain it, they just choose when to surface it. So users get "deleted" as UX theater while the data sits in cold storage waiting for subpoenas or PR fires.
The irony is the same infrastructure that protects them in copyright suits also lets them curate what investigators see. Retention and visibility are decoupled by design.
I'm fairly sure they made a big show back in the day about how they did, in fact, delete. But ultimately, no one outside of OpenAI really knows one way or the other.
I see current AIs as tools—a sophisticated lathe, not a thinking partner. The question isn't whether it "knows" anything.
The interesting question is: why does AI with correct information in its weights still give wrong answers? That's an engineering problem, not a metaphysics problem.
But here's what bothers me about the "AI doesn't truly know" argument: do we? When a senior dev answers "use Kubernetes" without asking about team size or user count, are they "comprehending" or pattern-matching on what sounds authoritative? The AI failure I described is identical to what I see in human experts daily.
Maybe the flaw isn't unique to AI. Maybe it's a mirror.
Why not both? It's certainly true that human 'experts' often rely on pattern-matching without fully understanding a problem. But AI has no understanding at all, so pattern matching is its only skill, whereas the human capacity for understanding isn't just greater than an AI's; it's fundamentally different. In what ways? That seems to be the multi-trillion-dollar question.
Works for large projects with active communities (Ghostty has both). The filter pays off when volume is high. It doesn't work for smaller projects, where every report matters and you want to lower barriers. The brutal honesty ("80-90% of you are wrong") is refreshing but may alienate contributors. A middle ground would be issue templates with mandatory checklists: they filter without adding an extra step.
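As a sketch of that middle ground, a CI step could bounce reports whose mandatory checklist items were left unchecked. Hypothetical Python, assuming GitHub-style markdown checkboxes in the issue body (not anything Ghostty actually runs):

```python
# Hypothetical CI check: fail if mandatory template checklist items
# are left unchecked. Assumes "- [ ]" / "- [x]" markdown checkboxes.
import re
import sys

def unchecked_items(issue_body: str) -> list[str]:
    # "- [ ] item" = unchecked, "- [x] item" = checked
    return re.findall(r"^- \[ \] (.+)$", issue_body, flags=re.MULTILINE)

body = sys.stdin.read()
missing = unchecked_items(body)
if missing:
    print("Incomplete checklist:", ", ".join(missing))
    sys.exit(1)  # bounce the report back instead of triaging it
```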
Interesting timing. I just analyzed TabNews (Brazilian dev community) and ~50% of 2025 posts mention AI/LLMs. The shift is real.
The 2014 peak is telling. That's before LLMs, before the worst toxicity complaints. Feels like natural saturation: the most common questions were already answered. My bet: LLMs accelerated the decline but didn't cause it. They just made finding those existing answers frictionless.
Built a crawler to analyze 6,380 posts from TabNews (Brazilian dev community) and turned the data into an interactive infographic. Found that AI/ML dominates ~50% of discussions, Monday 8am is the best time to post, and longer titles (100+ chars) actually perform better.
Stack: Python for scraping, pure HTML/CSS for the viz. No JS frameworks needed.
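For the curious, a minimal sketch of the sort of counting pass involved, assuming TabNews exposes a public JSON listing at /api/v1/contents (verify the endpoint, params, and field names against the live API before reusing):

```python
# Sketch: fetch recent TabNews posts and tally AI mentions + posting
# hours. Endpoint, query params, and field names are assumptions.
import collections
from datetime import datetime

import requests

API = "https://www.tabnews.com.br/api/v1/contents"
AI_TERMS = ("ai", "ia", "llm", "gpt", "copilot")  # naive keyword match

def fetch_posts(pages=5, per_page=30):
    posts = []
    for page in range(1, pages + 1):
        resp = requests.get(
            API,
            params={"page": page, "per_page": per_page, "strategy": "new"},
            timeout=10,
        )
        resp.raise_for_status()
        posts.extend(resp.json())
    return posts

def summarize(posts):
    ai_hits = 0
    by_hour = collections.Counter()
    for p in posts:
        title = (p.get("title") or "").lower()
        if any(term in title for term in AI_TERMS):
            ai_hits += 1
        created = datetime.fromisoformat(p["created_at"].replace("Z", "+00:00"))
        by_hour[created.hour] += 1
    print(f"AI/LLM mentions: {ai_hits}/{len(posts)} "
          f"({100 * ai_hits / len(posts):.0f}%)")
    print("Busiest posting hour (UTC):", by_hour.most_common(1))

summarize(fetch_posts())
```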
Great execution! The multi-source aggregation approach is smart - averaging warnings from 5+ governments gives a more balanced picture than relying on a single source.
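The aggregation itself can be dead simple; a hypothetical sketch (the source names and the 1-4 scale are made up for illustration, not the site's actual data model):

```python
# Hypothetical: average advisory levels from several governments,
# each normalized to a shared 1-4 scale
# (1 = exercise normal precautions, 4 = do not travel).
from statistics import mean

advisories = {
    "us_state_dept": 2,
    "uk_fcdo": 2,
    "canada": 3,
    "australia": 2,
    "germany": 3,
}

score = mean(advisories.values())
print(f"Aggregate advisory level: {score:.1f}/4 "
      f"(from {len(advisories)} sources)")
```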
The comparison feature is particularly useful for trip planning. Have you considered adding historical data to show if a country's safety rating is trending up or down over time?
Thank you! Trending/historical data is definitely on the radar. As the site runs and updates over time, I should accumulate that data and be able to add it in the future. I agree with you: seeing whether things are trending up or down could be really insightful.
In the past, technical friction acted as a 'natural quality filter.' Today, curation and distribution have become the new bottlenecks. 'Vibe coding' is creating software inflation—when software becomes too cheap to produce, the real value migrates toward trust and brand.
Yep! But it's also speeding up the craftspeople already building products. And maybe it's activating a generation of relentlessly resourceful people who haven't yet found their calling in building products because they've been lured into other jobs or studies.