>We can argue over whether or not it's "real" empathy
There's nothing to argue about, it's unambiguously not real empathy. Empathy from a human exists in a much broader context of past and future interactions. One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
Even strangers will. Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that, and not just because its interaction mediums are so limited, but also because that's not the purpose of the tool. The purpose of ChatGPT is to make immense amounts of money and power for its owners, and a nice-sounding chatbot currently happens to be an effective way of getting there. Sam Altman doesn't have empathy for random ChatGPT users he's never met, and neither do the computer algorithms his company develops.
>There's nothing to argue about, it's unambiguously not real empathy
I think if a person can't tell the difference between empathy from a human vs empathy from a chatbot, it's a distinction without a difference
If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
>One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
This is what I think people vastly overestimate
I don't think most people have such ready access to a friend who is both willing and able to perform such emotional labor, on demand, at no cost to themselves.
I think the sad truth is that empathy is a much scarcer resource than we believe, not through any moral fault of our own, but because it's just the nature of things.
The economics of emotions.
We'll see what the future has in store for the tech anyway, but if it turns out that the average person gets more empathy from a chatbot than from a human, it wouldn't surprise me
Empathy does not lie in its perception on receipt but in its inception as a feeling. It is fundamentally a manifestation of the modalities enabled by shared experience. As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that puts no emphasis on lived context and tries to substitute for it with offline batch learning. Understanding is possible in this relationship, but it should not be confused with empathy or compassion.
I happen to agree with what you said. (Paraphrasing: A machine cannot have "real empathy" because a machine cannot "feel" in general.) But I think you're arguing a different point from the grandparent's. rurp said:
> Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that [...]
Now, on the one hand that's because ChatGPT cannot "see a person" nor "stop [the car]"; it communicates only by text-in, text-out. (Although it's easy to input text describing that situation and see what text ChatGPT outputs!) GP says it's also because "the purpose of ChatGPT is to make immense amounts of money and power for its owners [, not to help others]." I took that to mean that GP was saying that even if an LLM were controlling a car and were able to see a person in trouble (or a tortoise on its back baking in the sun, or whatever), it still would not stop to help. (Why? Because it wouldn't empathize. Why? Because it wasn't created to empathize.)
I take GP to be arguing that the LLM would not help; whereas I take you to be arguing that even if the LLM helped, it would by definition not be doing so out of empathy. Rather, it would be "helping"[1] because the numbers forced it to. I happen to agree with that position, but I think it's significantly different from GP's.
Btw, I highly recommend Geoffrey Jefferson's essay "The Mind of Mechanical Man" (1949) as a very clear exposition of the conservative position here.
[1] — One could certainly argue that the notions of "help" and "harm" likewise don't apply to non-intentional mechanistic forces. But here I'm just using the word "helping" as a kind of shorthand for "executing actions that caused better-than-previously-predicted outcomes for the stranded person," regardless of intentionality. That shorthand requires only that the reader is willing to believe in cause-and-effect for the purposes of this thread. :)
Yes, I am not in fact expanding on GP's argument but etymologically attacking the premise. Pathos is not learnt. When I clutch my legs at the sight of someone getting kicked in the balls, that's empathy. When, as now, I write about it, it's not, even in my case, where I have lived experience of it. More sophisticated kinds of empathy build on the foundations of these gut-driven ones. Thank you for the reading recommendation; I will look for it.
> As such impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning.
Conversely, that means empathy is possible to the extent that our experiences are compatible with those of an AI. That is precisely what's under consideration here, and you have not shown that it is zero.
> an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning.
Will you change your tune when online learning comes along?
Lived context is to me more than online learning. I admit I am not so versed in the space as to be able to anticipate the nature of context in the case of online learning, so, yes, indeed I may change my tune if it somehow makes learning more of an experience rather than an education. My understanding is it won’t. I have not proven, but argued, that experience compatibility is zero, to the extent that an LLM does not experience anything. Happy to accept alternative viewpoints, and accordingly that someone may perceive something as a sign of empathy whether it is or not.
>If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
Boiling it down to neural signals is a risky approach, imo. There are innumerable differences between these interactions. This isn't me saying interactions are inherently dangerous if artificial empathy is baked in, but equating them to real empathy is.
Understanding those differences is critical, especially in a world of both deliberately bad actors and those who will destroy lives in the pursuit of profit by normalizing replacements for human connections.
There's a book that I encourage everyone to read called Motivational Interviewing. I've read the 3rd edition and I'm currently working my way through the 4th edition to see what's changed, because it's a textbook that they basically rewrite completely with each new edition.
Motivational Interviewing is an evidence-based clinical technique for helping people move through ambivalence during the contemplation, preparation, and action stages of change under the Transtheoretical Model.
In Chapter 2 of the 3rd Edition, they define Acceptance as one of the ingredients for change, part of the "affect" of Motivational Interviewing. Ironically, people do not tend to change when they perceive themselves as unacceptable as they are. It is when they feel accepted as they are that they are able to look at themselves without feeling defensive and see ways in which they can change and grow.
Nearly all that they describe in Chapter 2 is affective—it is neither sufficient nor even necessary in the clinical context that the clinician feel a deep acceptance for the client within themselves, but the client should feel deeply accepted so that they are given an environment in which they can grow. The four components of the affect of acceptance are autonomy support, absolute worth (what Carl Rogers termed "Unconditional Positive Regard"), accurate empathy, and affirmation of strengths and efforts.
Chapters 5 and 6 of the third edition define the skills of providing the affect of acceptance defined in Chapter 2—again, not as a feeling, but as a skill. It is something that can be taught, practiced, and learned. It is a common misconception to believe that unusually accepting people become therapists, but what is actually the case is that practicing the skill of accurate empathy trains the practitioner to be unusually accepting.
The chief skill of accurate empathy is that of "reflective listening", which essentially consists of interpreting what the other person has said and saying your interpretation back to them as a statement. For an unskilled listener, this might be a literal rewording of what was said, but more skilled listeners can, when appropriate, offer reflections that read between the lines. Very skilled listeners (as measured by scales like the Therapist Empathy Scale) will occasionally offer reflections that the person being listened to did not think, but will recognize within themselves once they have heard it.
In that sense, in the way that we measure empathy in settings where it is clinically relevant, I've found that AIs are, with some prompting, very capable of displaying the affect of accurate empathy.
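For the curious, here's a minimal sketch of the kind of prompting I mean, using the OpenAI Python client. The model name, the system prompt wording, and the `reflect` helper are all my own illustrative assumptions, not anything from the MI textbook:

```python
# A minimal sketch of prompting an LLM toward reflective listening.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are practicing reflective listening, in the sense used by "
    "Motivational Interviewing. Do not give advice or ask questions. "
    "Restate what the person has said as a statement, occasionally "
    "reading between the lines to reflect the feeling or meaning "
    "underneath their words."
)

def reflect(user_message: str) -> str:
    """Return a single reflective-listening response to the message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reflect("I know I should quit smoking, but it's the only "
              "break I get all day."))
# A skilled reflection here might read between the lines, e.g.:
# "Smoking feels less like a habit to you and more like the one
#  moment of relief you can count on."
```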
A lot of human empathy isn't real either. Defaulting to the most extreme example, narcissists use love bombing to build attachment. Salespeople use "relationship building" to make money. AI actually seems better than these -- it isn't building up to a rug pull (at least, not one that we know of yet).
And it's getting worse year after year, as our society gets more isolated. Look at trends in pig butchering, for instance: a lot of these are people so incredibly lonely and unhappy that they fall into the world's most obvious scam. AI is one of the few things that actually looks like it could work, so I think realistically it doesn't matter that it's not real empathy. At the same time, Sam Altman looks like the kind of guy who could be equally effective as a startup CEO or running a butchering op in Myanmar, so I hope like hell the market fragments more.
This is a good point: you can't be dependent on a chatbot in the same way you're dependent on someone you share a lease with. If people take up chatbots en masse, maybe it says more about how they perceive the risk of virtual or physical human interactions vs AI. Some people I have met in the past make the most sycophantic AIs seem tame by comparison. When you come back from that in real life, you realize that this is all just a bunch of text.
I treat AIs dispassionately, like a secretary I can give infinite amounts of work to without needing to care about them throwing their hands up. That sort of mindset is not conducive to developing any feelings. With humans, you need empathy to avoid burdening them with excessive demands. If it solely comes down to getting work done (and not building friendships or professional relationships, etc.), then that need to restrain your demands is a limitation of human biology that AIs kind of circumvent for specific workloads.