Robot Caregivers
Zeynep Tufekci had a recent post up on Medium in which she argues strongly against the assumptions underlying a New York Times op-ed by Louise Aronson. Aronson’s argument revolved around both the cost of elder care and the emotional burden placed on family members who are tasked with caring for elderly relatives.
Tufekci argues that this is a (historically) good time to consider the impact of the jobs lost to robotics, and to hold the line on the loss of jobs to automation, since we are nearing, if not at, the point where job loss will outpace job creation. She also feels that, given the will and a more relaxed immigration policy, the cost of this would be manageable, and, finally, that the care provided via this mechanism (aka “paid care by people”) would be more humane.
I’m not concerned with her job loss/job creation argument. I’m not sure we’re at the exact point when we should start being concerned, but in the limit, I would guess that she’s correct. Her broader point, that those disrupted by the technology shift often cannot find employment and that the effect upon these particular individuals is both cruel and long-lasting, is spot on — you don’t need to be a fan of Detroit Ruin Porn to know that many people never find jobs again.
My concern with the post is her postulate that care by people is more humane than care by robots. I find it difficult to believe that this is the case. Sure, care by someone who loves you, who has the time and the emotional and physical capacity to provide care without becoming exhausted, angry at you for being unreasonable, or guilty about not being able to fulfill their obligations, both to you and to their other loved ones, would likely be more humane. Otherwise, I’m not seeing it, and I don’t think I’m being pessimistic in thinking that the relaxed, loving, cheerful care provider is more the exception than the rule.
Just to pick one example from a short IEEE survey article, one of the strong motivations for developing the technology is to reduce stress for both the care provider and the patient. If the care provider can offload some of the more stressful interactions, their ability to provide enjoyable emotional interactions with the patient can increase markedly.
It helps to distinguish between the two sides of these emotional/humane interactions: there is the emotional connection, which provides emotional support to the patient, and there is the emotional labor, which is the work involved in providing that support.
Emotional Connection: Providing Emotional Support
Creating an emotional connection with patients makes them feel better, keeps them engaged with their lives, and leaves them happier with their overall existence. Emotional support can be provided by people and pets — I see no a priori reason why it can’t be provided by robots. Having spent a bit of time at a conference with a Paro, I have no doubt that robots can be engaging, caring companions, despite having a limited emotional range (I came out of the experience saying that they’ve “weaponized cuteness” — it’s that hard to defend yourself against it).
The Labor of Emotions
The other side of the interaction is the labor required to maintain a positive emotional connection, especially when providing for an injured or aging patient. It is not always uplifting: it’s stressful because they are demanding and unreasonable; we have other demands on our time; and we can’t ignore them even when we ourselves are tired or in need of assistance. A spouse may also have difficulty suppressing the thought “that will be me soon, and who will care for me?” It’s high (emotional) cost labor that minimizes the chance of good interactions.
Hired caregivers have many of the same constraints, but without an emotional connection to buffer them through the bad times. Not discounting the idea that many are drawn to the field due to their desire to help people, many are not, and are there for the same reasons most of us are at our jobs — add in a difficult, demanding boss (the patient) and it is easy to see why many of these relationships turn sour. Admittedly, more often than not an emotional connection develops, but there is certainly no guarantee of it, and there is more than enough civil and criminal legal precedent to caution against assuming that a human caregiver will be humane.
The Stress of Interactions
Additionally, some interpersonal interactions are inherently stressful for the patient:
- I don’t think any conscious adult is comfortable with not being able to clean themselves after going to the bathroom. It is stressful both for the patient and the caregiver. I attended a talk claiming that this was one of the top reasons for the elderly leaving their homes, but I haven’t been able to verify it.
- There is also significant stress to the patient when exhibiting a newfound disability in front of another human being — especially when it is in an area in which they were previously adept.
What Robots Bring to the Table
Robots, assuming that they work reasonably well, bring a lot of positive attributes to these situations. They are patient, they won’t physically fatigue, and they can be made to exhibit behaviors that will be interpreted as caring and helpful. I’ll make the claim that if the patient genuinely believes that the robot is caring and helpful, it makes no difference what the robot actually “thinks” — an emotional Turing test, if you will. In the same way, we don’t know what our pets actually “think” of us, and we probably couldn’t interpret it if we did (approximately the point of Wittgenstein’s “If a lion could speak, we could not understand him”).
The robot, by providing relief to any emotionally attached caregiver involved in the patient’s care, allows those interactions to occur at times when the caregiver is more physically and emotionally able to attend to the patient, enhancing the overall interaction.
What Robots Leave Out
For the foreseeable future, even at their best, robots will not have the shared history of emotional attachment and the putative spontaneity that enhance our best interactions, but I think that Paro-level cuteness coupled with fatigue-free, attentive care is a tradeoff that would be very satisfactory in many, if not most, structured, for-a-fee caregiving situations.
That said, it’s also inevitable that some patients will not have the opportunity for regular contact with anyone with whom they share a substantial emotional history. Would such people be worse off if they only had robots that present caring behavior rather than paid caregivers? I don’t see the case for this. The argument is partly probabilistic: I’m assuming that robot caregivers have a relatively narrow range of “interactional quality,” from adequate to good, while people range from excellent to evil. Since any elderly patient will have to draw from the “caregiver urn” multiple times over the course of their senescent decline, I would think that the chance of getting someone worse than a robot is high, and the chance of getting someone in the bad-to-evil range is non-negligible (roughly 65% if you estimate that 10% of caregivers are bad to evil and that 10 caregivers are needed over the duration; a back-of-the-envelope check follows below). Given this, I find it hard to accept the argument that robot care is the less humane option.
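For what it’s worth, here is a minimal sketch of the arithmetic behind that ~65% figure, under the assumptions just stated (this is my own back-of-the-envelope reading of the estimate, not a model from either piece):

```python
# Chance of drawing at least one bad-to-evil caregiver, assuming:
#   - 10% of caregivers fall in the bad-to-evil range, and
#   - roughly 10 caregivers are drawn independently over the decline.
p_bad = 0.10        # assumed fraction of bad-to-evil caregivers
n_caregivers = 10   # assumed number of caregivers needed over the duration

# P(at least one bad draw) = 1 - P(every draw is fine)
p_at_least_one_bad = 1 - (1 - p_bad) ** n_caregivers
print(f"P(at least one bad caregiver) = {p_at_least_one_bad:.2f}")  # ~0.65
```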