
Writing/reading and AI are so categorically different that the only way you could compare them is if you fundamentally misunderstand how both of them work.

And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.



The argument Socrates is making is specifically that writing isn't a substitute for thinking, but it will be used as such. People will read things "without instruction" and claim to understand those things, even if they do not. This is a trade-off of writing. And the same thing is happening with LLMs in a widespread manner throughout society: people are having ChatGPT generate essays, exams, legal briefs and filings, analyses, etc., and submitting them as their own work. And many of these people don't understand what they have generated.

Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding", that writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.

I feel that this is an apt analogy for, say, comparing someone who has only ever vibe-coded to an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment regarding fear of skill atrophy - they are practicing code generation but are spending less time thinking about what all of the produced code is doing.


Yes, but people just really like to predict doom, and they also like to be convinced that they live in some special era in human history.


It takes about 30 seconds of thinking and/or searching the Internet to realize that people also predict doom when it actually happens - e.g. with people correctly predicting that TikTok will shorten people's attention spans.

It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd. Calling that idea "intellectually lazy" is an insult to smart-but-lazy people. This is more like intellectually incapable.

The fact that people will unironically say such a thing in the face of not only widespread personal anecdotes from well-respected figures, but scientific evidence, is depressing. Maybe people who say these things are heavy LLM users?


There is always some set of people predicting all sorts of dooms though. The saying about the broken clock comes to mind.

With the right cherry picking, it can always be said that [some set of] the doomsayers were right, or that they were wrong.

As you say, someone predicting doom has no bearing on whether it happens, so why engage in it? It's just spreading FUD and dwelling on doom. There's no expected value to the individual or to others.

Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.


Did you actually read what you're responding to?

> And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

> the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd

It's pretty clear that I'm not defending engaging in baseless negative speculation, but refuting the dismissal of negative speculation based purely on the trope that "people have always predicted it".

Someone who read what they were responding to would rather easily have seen that.

> As you say, someone predicting doom has no bearing on whether it happens

That is not what I said. I'm pretty sure now that you did not read my comment before responding. That's bad.

This is what I said:

> It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd.

I'm very clearly pointing out (with "someone, somewhere") that a random person predicting a bad thing has almost no ("~zero") impact on the future. Obviously, if someone who has the ability to affect the future (e.g. a big company executive, or a state leader (past or present)) makes a prediction, they have much more power to actually affect the future.

> so why engage in it? It's just spreading FUD and dwelling on doom.

Because (rational) discussion now has the capacity to drive change.

> There's no expected value to the individual or to others.

Trivially false - else most social movements would be utterly irrelevant, because they work through the same mechanism - talking about things that should be changed as a way of driving that change.

It's also pretty obvious that there's a huge difference between "predicting doom with nothing behind it" and "describing actual bad things that are happening that have a lot of evidence behind them" - which is what is actually happening here, so all of your arguments about the former point would be irrelevant (if they were valid, which they aren't) because that's not even the topic of discussion.

I suggest reading what you're responding to before responding.

> Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.

You're bringing up "doom" as a way to pedantically quarrel about word definitions. It's trivial to see that that's completely irrelevant to my argument - and worth noting that you're then conceding the point about people correctly predicting that TikTok will shorten people's attention spans, hence validating the need to have discussions about it.


We are very clearly living through a moment in history that will be studied intensely for thousands of years.


Because of the collapsing empire, mind you, not because of the LLMs.


Creation of the internet, social media, everyone on the planet getting a pocket sized supercomputer, beginning of the AI boom, Trump/beginning of the end of the US, are all reasons people will study this period of time.


This is really interesting because I wholeheartedly believe the original sentiment that everyone thinks their generation is special, and that "now this time they've really screwed it all up" is quite myopic -- and that human nature and the human experience are relatively constant throughout history while the world changes around us.

But it is really hard to escape the feeling that digital technology and AI are a huge inflection point. In some ways these couple of generations might be the singularity. Trump and contemporary geopolitics in general are a footnote, a silly blip that will pale in comparison over time.


I know managers who can read code just fine; they're just not able or willing to write it. Though AI helps with that too. I've had a few managers dabble back into coding, especially scripts and the like, where I want them pulling unique data and doing one-off investigations.


I read grandparent comment as saying people have been claiming that the sky is falling forever… AI will be both good for learning and development and bad. It’s always up to the individual if it benefits them or atrophies their minds.


I'm not a big fan of LLMs, but while using them for day-to-day tasks, I get the same feeling I had when I first started using the internet (I was lucky to start with broadband).

That feeling was one of empowerment: I was able to satisfy my curiosity about a lot of topics.

LLMs can do the same thing and save me a lot of time. It's basically a supercharged Google. For programming it's a supercharged autocomplete coupled with a junior researcher.

My main concern is independence. LLMs in the hands of just a bunch of unchecked corporations are extremely dangerous. I kind of trusted Google, and even that trust is eroding, and LLMs can be extremely personal. The lack of trust ranges from risk of selling data and general data leaks, to intrusive and worse, hidden ads, etc.


When I first started using the internet, I was able to instant message (IRC) random strangers, using a fake name, and lie about my age. My teacher had us send an email to our ex-classmate who had moved to Australia, and she replied the next day. I was able to download the song I had just heard on the radio and play it as many times as I wanted in my Winamp.

These capabilities simply didn't exist before the internet. Apart from the email to Australia (which was possible with a fax machine, but much more expensive), LLMs don't give you any new capabilities. They just provide a way for you to do what you already can (and should) do with your brain, without using your brain. It is more like replacing your social interaction with Facebook than it is like experiencing an instant-message group chat for the first time.


Before LLMs it was incredibly tedious or expensive or both to get legal guidance for stuff like taxes, where I live. Now I can orient myself much better before I ask an actual tax expert pointed questions, saving a lot of time and money.

The list of things they can provide is endless.

They're not a creator, they're an accelerator.

And time matters. My interests are myriad but my capacity to pass the entry bar manually is low because I can only invest so much time.


If this resembles the feeling you had when you first used the internet, it is drastically different from when I used the internet.

When I first used the internet, it was not about doing things faster, it was about doing things which were previously simply unavailable to me. A 12 year old me was never gonna fax my previous classmate who moved to Australia, but I certainly emailed her.

We are not talking about a creator nor an accelerator, we are talking about an avenue (or a road if you will). When I use the internet, I am the creator, and the internet is the road that gets me there.

When I use an LLM it is doing something I can already do, but now I can do it without using my brain. So the feeling is much closer to doomscrolling on social media, where previously I could just read a book or meet my pals at the pub. Doomscrolling Facebook is certainly faster than reading a book, or socializing at the pub. But it is a poor replacement for either.


I didn't have friends in other countries.

I could however greatly enrich my general knowledge in ways I couldn't do with books I had access to.


Prior to the internet I used my school library for that (or when I was very young, books at my grandparent’s house). So for me personally that wasn’t a new capability. It wasn’t until I started using Wikipedia around 2004 (when I was 17 years old) that the internet replaced (or rather complemented) libraries for that function.

But I can definitely see how, for many people with less access to libraries (or worse-quality libraries than what I had access to), the internet provided a new avenue for gaining knowledge which wasn't available before.


To understand the impact on computer programming per se, I find it useful to imagine that the first computer programs I had encountered were, somehow, expressed in a rudimentary natural language. That (somewhat) divorces the consideration of AI from its specific impact on programming. Surely it would have pulled me in certain directions. Surely I would have had less direct exposure to the mechanics of things. But, it seems to me that’s a distinction of degree, not of kind.



