Tuesday, February 02, 2021

Empathy Calibration

 

February 2, and still thinking about the Applied Rationality Training Regime sequence. In fact I expect (85% confidence?) to think about it almost every day this month, and on a lot of days after that. But my main hope at present is that I can use some of its tactics, and other things that have occurred to me or may occur to me in future, to improve my totally inadequate sense of what other people are feeling and what they want or need from me. (I'll re-reference my 2016 post On Being A Geek rather than reiterate a summary here.) I'm trying to be somewhat Bayesian here, with Sarah Som (my "Sens-O-Meter", in Mark Xu's terms) thinking about populations of possibilities. Two examples of attempted empathy, or at least of modelling other people's choices, came up this morning.

First, a choice to be made by my daughter. I wrote down what I thought she'd choose, with a "65%" confidence that I was really thinking of as about 2:1 odds, and that's what she chose. Okay... good. But when I thought about it later, I realized that the 2:1 odds weren't really about the choice itself, but about my model of what she'd be feeling. Call it model M. Within M, my confidence would be about 19:1, with the 5% residue covering my ignorance, specifically things that might have happened to change her schedule. Outside M, the other one-in-three probability? Call that model X; there I had no idea -- it's just a label for failure. So in model X I guess I'd take the default prior of 50:50.

Now my actual confidence in the prediction would have been the sum of the two contributions. From M I get 2/3 * 19/20 = 63.3%; from X I get 1/3 * 1/2 = 16.7%; so the total confidence I should have had in the prediction was about 80%. But it's probably more relevant that M grows a bit and X shrinks a bit. If I'd given M 20 bits of probability mud and X 10 in advance, representing their 2:1 odds ratio, then M would have put 19 of its bits on the correct prediction and X only 5, so their subsequent ratio would be 19:5. So I guess I'm now 79% confident in model M, which I think anybody else might well consider obvious. Yay?
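(For concreteness, here's that bookkeeping as a few lines of Python -- just a sketch of the arithmetic above, not anything I actually ran, and the helper name is only illustrative.)

# A sketch of the two-model arithmetic: mixture prediction, then
# Bayesian reweighting of the models once the prediction comes true.
def mixture_and_update(prior_m, p_given_m, p_given_x):
    prior_x = 1.0 - prior_m
    # Total probability of the prediction, summed over both models.
    p_total = prior_m * p_given_m + prior_x * p_given_x
    # Each model's weight scales by how well it predicted the outcome.
    post_m = prior_m * p_given_m / p_total
    post_x = prior_x * p_given_x / p_total
    return p_total, post_m, post_x

p_total, post_m, post_x = mixture_and_update(prior_m=2/3, p_given_m=19/20, p_given_x=1/2)
print(f"prediction confidence: {p_total:.0%}")  # 80%
print(f"posterior weight on M: {post_m:.0%}")   # 79%, i.e. 19:5 against X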

Second, an extended-family situation. Extremely elderly relation R died; he was a good guy, and he had a good life and a long one. I was surprised that person P didn't send out a family email, but thought up three possible scenarios S1, S2, S3, plus the "Something Else" scenario Sx as always, because I don't really know. S1 was simply that his wife wanted to decide what to say and was taking a few days, and P was waiting on that; S2 and S3 were more complicated and less likely, so I was saying 75%, 10%, 5%, with 10% for Sx -- and then a simple question ruled both S2 and S3 out. Then I was told that his wife had said she and his kids were talking about what to say. Do I view that as a match for S1? I hadn't been thinking about his kids, or about likely communication delays. So, probably not; it's really Sx. And yet.... I dunno. What do I learn? I dunno. Except maybe that I should have tried to draw the situation with circles and connectors, with at least different weights on different connectors; that might have prompted Sarah to expect a model-failure.
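(Again just a sketch, not something I actually did at the time: renormalizing the leftover probability after that simple question ruled out S2 and S3, which would have left S1 around 88% and Sx around 12%, before the wife-and-kids detail pushed me toward Sx anyway.)

# Scenario weights before the question; zero out the ruled-out ones
# and renormalize what's left.
scenarios = {"S1": 0.75, "S2": 0.10, "S3": 0.05, "Sx": 0.10}
for ruled_out in ("S2", "S3"):
    scenarios[ruled_out] = 0.0
total = sum(scenarios.values())
posterior = {name: weight / total for name, weight in scenarios.items()}
print(posterior)  # S1: ~0.88, Sx: ~0.12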

I spent part of the day thinking about trial and error and genetic algorithms here, probably because it's John Holland's birthday, and a genetic algorithm ought to be a relatively less-bad way to search a space in which I know very little. Somehow, my thoughts go round and round and back again: I don't yet think it's a productive path for this problem. Maybe what I really need is a properly configured emotion chip; it's Brent Spiner's birthday too.

