Monday, February 08, 2021

Filibuster Thoughts

I've been asked for my thoughts on Ezra Klein's "Definitive Case For Ending the Filibuster" from last October: "Every argument for the filibuster, considered and debunked". It's not the sort of thing I would normally have an opinion about. Still, it's an interesting article.... I've shortened this a little, but haven't taken the time to shorten it much. I have very low confidence in this having any value at all, so it probably belongs among "Mistakes By TJM".

We start with Klein's last paragraph:
"But in the end, I trust voters more than I trust politicians. And so I prefer a system in which voters get some rough approximation of the change they vote for, and can then judge the results and choose whether to reelect the leaders they entrusted with power or throw them out of office. In American politics, perfection is too much to ask for. But some bare level of accountability is not."

Hmm... Actually, I don't believe this is a true representation of his preferences; this is what we've already got, and he's not happy with it. At the moment there seem to be 52 Senators firmly against ending the filibuster: all the Republicans plus Manchin plus Sinema. They are the people the voters voted for, and if the voters don't like it, they can remove those Senators, either in their respective 2024 Democratic primaries or in the elections to follow. The articles I'm reading suggest that, on the contrary, Manchin's and Sinema's constituents don't want increased progressivism and would be more likely to vote them out for pushing Klein's agenda.

It's a pretty dramatic agenda, and this I take as an actual representation of his preferences, right at the start:

...[1] strengthening the right to vote, [2] ...decarbonizing America, [3] ensuring no gun is sold without a background check, [4] raising the minimum wage, [5] implementing universal pre-K, [6] ending dark money in politics, [7] guaranteeing paid family leave, [8] offering statehood to Washington, DC, and Puerto Rico, [9] reinvigorating unions, passing the George Floyd Justice in Policing Act ...

If Democrats decide ... to leave the 60-vote threshold in place, that entire agenda, and far more beyond it, is dead. All those ... grand ideas on Joe Biden’s “vision” page ... will be revealed as promises they never meant to keep.

That last bit about "promises they never meant to keep" seems obviously false. Biden has been in elective office since 1970, three years after Loving v. Virginia; I think he has a different perspective on how fast progress has to happen before you should judge it that way. Progress has been made, is being made, will be made... (Ezra Klein is 36.) And I think Biden also has to be thinking that if you get too far in front of the parade, it's not your parade any more; you lose the next election (the one coming up next year) and even the one coming up two years after that, and then it's hard to keep promises. There has been a real shift, an immense shift, in those 50 years, and one can readily understand the reactionary's credo that "Cthulhu always swims left". But I look at Scott Alexander's 2013 The Anti-Reactionary FAQ, and see his summary of US time series, some of which did go back to 1970, and then of international data:

Not only is the leftward shift less than people intuitively expect,
it does not affect all issues equally. The left’s real advantage is
limited to issues involving women and minorities. Remove these, and
opinion shifts to the left on 11 issues and to the right on 12. The
average shift is one point rightward per issue.

On the hottest, most politically relevant topics, society has moved
leftward either very slowly or not at all. Over the past generation,
it has moved to the right on gun control, the welfare state, capitalism,
labor unions, and the environment. Although the particular time series
on the chart does not reflect this, support for abortion has stabilized
and may be dropping. This corresponds well with the DW-NOMINATE data
that finds a general rightward trend in Congress over the same period.
The nation seems to be shifting leftward socially but rightward politically
– if that makes any sense.

Actually, it makes sense to me. More people are willing to admit that all h.saps are actually people, but the progress has not really been towards progressivism as such. (And some feel that people who haven't been born yet just might be people anyway; it's complicated.) Progressivism, I believe, involves a trust in government wisdom which has not increased with the years. Klein, who I think can be taken as a representative of the progressive elite (among the most-read pundits in the White House, apparently), says he trusts the voters. Some of them return that trust; others, even some who might agree on values, don't return it at all. Doubtless some voters are simply against "strengthening the right to vote", but I think there are quite a few who feel suspicious of the people who'd be implementing "same-day registration" (for example), on the grounds that advance registration somehow isn't good enough to provide the right to vote. Similarly for expanded gun checks, minimum wage, and so on. Hmmm... minimum wage....

Biden wants full employment, and Yellen says that his $1.9T stimulus bill can produce it. Meanwhile, the CBO says that a $15/hour federal minimum wage will cost 1.4 million jobs in a few years. There's no certainty in either of those, but that CBO statement doesn't sound like a recipe for re-election.

(My own view, FWIW, is that the effect of the minimum wage "in a few years" is the wrong focus. The effect that matters is the long-run effect: as you raise labor costs, you change the incentives for businesses planning new ventures with new investments in labor-saving technology. I therefore used to oppose minimum-wage increases, at least at a Federal level: it's a long-run effect and I don't expect politicians to pay attention, but it does permanently hurt the people it was trying to help. (Today's the birthday of Joseph Schumpeter: “History is a record of "effects" the vast majority of which nobody intended to produce.”) Lately I've changed my mind; I now support minimum-wage increases in spite of the long-run effect I expect on unemployment. I think about the recent military successes of "drone swarms", and it seems to me that AI is developing in unpredictable ways from unpredictable starting points.... (1) We might develop general AI starting from systems that try to figure out what humans are trying to do and use that knowledge to kill them. Or (2) we might develop general AI starting from retail systems that try to figure out what humans are trying to do and use that knowledge to give them what they want. I have a preference. A rising minimum wage, even while hurting in the long run the people it's intended to help in economic terms, also provides a very slight improvement of the long-run odds for the survival of h.sap. Or so I think, beginning a couple of months ago. The data changes, so I change my opinion. But my reason for including this is just that it's complicated. I don't see Ezra Klein believing that; he sees a problem, thinks of a straightforward solution, and somehow he doesn't think of Schumpeter at all.) Back to the subject...

And Alexander goes on with the international survey:

Over time, societies tend to move from traditional and survival values
to secular-rational and self-expression values. This is the more rigorous
version of the “leftward shift” discussed above.

And that's the shift that has been Biden's lifetime, and mine. I think it's a really good shift; I'd like it to continue. A much much much more energetic government, which Klein is promoting, will help and hinder. Which effect dominates? At the moment, my money is on "hinder". I don't think that our government deserves to be trusted to handle sudden changes well, and I really worry about the future Trump, the smoother populist who gets in because Biden moved too fast, and who has no filibuster to stop him. So far I don't agree with Klein about means. I dunno about ends; that's more likely.



Tuesday, February 02, 2021

Empathy Calibration

 

 February 2, and still thinking about the  Applied Rationality Training Regime sequence. In fact I expect (85% confidence?) to think about it almost every day this month and a lot of days further onwards. But my main hope at present is that I can use some of the tactics, and other things that have occurred to me or may occur to me in future, to improve my totally inadequate sense of what other people are feeling and what they want or need from me. (I'll re-reference my 2016 post On Being A Geek rather than reiterate a summary here.) I'm trying to be somewhat Bayesian here, with Sarah Som (my "Sens-O-Meter", in Mark Xu's terms) thinking about populations of possibilities. Two examples of attempted-empathy, or at least of modelling other people's choices, came up this morning. 

 First, a choice which was to be made by my daughter; I wrote down what I thought she'd choose, writing a "65%" confidence which I was really thinking of as being about 2:1 odds, and that's what she chose. Okay...good. But when I thought about it later, I realized that it wasn't that I was really thinking of 2:1 odds for the choice, but rather for my model of what she'd be feeling. Call it model M. Within M, my confidence would be about 19:1, with the 5% residue having to do with my ignorance, specifically about things that might have happened to change her schedule. Outside M, the other one-in-three probability? Call that model X, and there I had no idea -- it's just a label for failure. So in model X I guess I'd take the default prior of 50:50. 

Now my actual confidence in the prediction would have been the sum of the two joint probabilities. From M I get 2/3 × 19/20 ≈ 63.3%; from X I get 1/3 × 1/2 ≈ 16.7%, for a total confidence that I should have had of about 80% overall probability for the prediction. But it's probably more relevant that M grows a bit and X shrinks a bit. If I'd given M 20 units of probability mud and X 10 in advance, representing their 2:1 odds ratio, then M would have put 19 on the correct prediction and X only 5, so their subsequent ratio would be 19:5. So I guess I'm now 79% confident in model M, which I think anybody else might well consider obvious. Yay?
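The arithmetic above can be sketched in a few lines; this is a minimal sketch of the two-model mixture, with the numbers taken from the text (the variable names are mine):

```python
# Model M (my model of what she'd be feeling) gets prior weight 2/3;
# model X ("something else, I have no idea") gets 1/3 and defaults to 50:50.
p_m, p_x = 2 / 3, 1 / 3       # prior weights: 20 and 10 units of probability mud
pred_given_m = 19 / 20        # M's confidence in the prediction
pred_given_x = 1 / 2          # X's uniform default

# Probability I should have assigned to the prediction overall:
p_pred = p_m * pred_given_m + p_x * pred_given_x   # 19/30 + 1/6 = 0.8

# After the prediction comes true, Bayes-update the model weights:
post_m = (p_m * pred_given_m) / p_pred             # 19/24, about 0.79 -- the 19:5 ratio
post_x = (p_x * pred_given_x) / p_pred             # 5/24
```

The "probability mud" bookkeeping and the explicit Bayes update are the same computation: 19 units versus 5 units is just the unnormalized form of 19/24 versus 5/24.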

Second, an extended-family situation. Extremely elderly relation R died; he was a good guy, he had a good life and a long one. I was surprised that person P didn't send out a family email, but thought up three possible scenarios S1, S2, S3, with the "Something Else" scenario Sx as always, because I don't really know. S1 was simply that his wife wanted to decide what to say, and was taking a few days, and P was waiting on that; S2 and S3 were more complicated and less likely, so I was saying 75%, 10%, 5%, with 10% for Sx -- and then a simple question ruled out both S2 and S3. Then I was told that his wife had said she and his kids were talking about what to say. Do I view that as a match for S1? I hadn't been thinking about his kids, or about likely communication delays. So, probably not; it's really Sx. And yet.... I dunno. What do I learn? I dunno. Except maybe that I should have tried to draw the situation with circles and connectors, with at least different weights on different connectors; that might have prompted Sarah to expect a model-failure.
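The scenario bookkeeping here is just conditioning: when the question eliminated S2 and S3, the remaining mass gets renormalized. A minimal sketch, with the scenario labels and numbers from the text:

```python
# Prior over scenarios, including the catch-all "Something Else" bucket Sx.
priors = {"S1": 0.75, "S2": 0.10, "S3": 0.05, "Sx": 0.10}

# A simple question ruled out S2 and S3; drop them and renormalize the rest.
surviving = {s: p for s, p in priors.items() if s not in ("S2", "S3")}
total = sum(surviving.values())
posterior = {s: p / total for s, p in surviving.items()}
# S1 rises to about 0.88 and Sx to about 0.12 -- until the detail about
# the kids suggests the true explanation was living in Sx all along.
```

The catch-all Sx is doing the same work as model X in the first example: a labeled bucket for "my models are wrong", which keeps the update honest when reality matches none of the named scenarios.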

  I spent part of the day thinking about trial and error and genetic algorithms here, probably because it's John Holland's birthday, and a genetic algorithm ought to be a relatively less-bad way to search a space in which I know very little. Somehow, my thoughts go round and round and back again: I don't yet think it's a productive path for this problem. Maybe what I really need is a properly configured emotion chip; it's Brent Spiner's birthday too.

