Monday, February 08, 2021

Filibuster Thoughts

I've been asked for my thoughts on Ezra Klein's "Definitive Case For Ending the Filibuster" from last October: "Every argument for the filibuster, considered and debunked". It's not the sort of thing I would normally have an opinion about, but it's an interesting article. I've shortened this a little, though I haven't taken the time to shorten it much. I have very low confidence in this having any value at all, so it probably belongs among "Mistakes By TJM".

We start with Klein's last paragraph:
"But in the end, I trust voters more than I trust politicians. And so I prefer a system in which voters get some rough approximation of the change they vote for, and can then judge the results and choose whether to reelect the leaders they entrusted with power or throw them out of office. In American politics, perfection is too much to ask for. But some bare level of accountability is not."

Hmm... Actually, I don't believe this is a representation of his preferences at all; it's a description of the system we already have, which he's not happy with. At the moment there seem to be 52 Senators firmly against ending the filibuster: all 50 Republicans plus Manchin plus Sinema. They are the people the voters voted for, and if the voters don't like it, they can remove those Senators: Manchin and Sinema in their 2024 Democratic primaries, or any of the 52 in the elections to follow. The articles I'm reading suggest that, on the contrary, Manchin's and Sinema's constituents don't want increased progressivism and would be more likely to vote them out for pushing Klein's agenda.

It's a pretty dramatic agenda, and this, right at the start, I do take as an actual representation of his preferences:

...[1] strengthening the right to vote, [2] ...decarbonizing America, [3] ensuring no gun is sold without a background check, [4] raising the minimum wage, [5] implementing universal pre-K, [6] ending dark money in politics, [7] guaranteeing paid family leave, [8] offering statehood to Washington, DC, and Puerto Rico, [9] reinvigorating unions, [10] passing the George Floyd Justice in Policing Act ...

If Democrats decide ... to leave the 60-vote threshold in place, that entire agenda, and far more beyond it, is dead. All those ... grand ideas on Joe Biden’s “vision” page ... will be revealed as promises they never meant to keep.

That last bit about "promises they never meant to keep" seems obviously false. Biden has been in elective office since 1970, three years after Loving v. Virginia; I think he has a different perspective on how fast progress has to happen before you should judge it that way. Progress has been made, is being made, will be made... (Ezra Klein is 36.) And I think Biden also has to be thinking that if you get too far in front of the parade, it's not your parade any more: you lose the next election (the one coming up next year), and even the one two years after that, and then it's hard to keep promises. There has been a real shift, an immense shift, in those 50 years, and one can readily understand the reactionary's credo that "Cthulhu always swims left". But I look at Scott Alexander's 2013 The Anti-Reactionary FAQ, and see his summary of US time series, some of which did go back to 1970, and then of international data:

Not only is the leftward shift less than people intuitively expect, it does not affect all issues equally. The left’s real advantage is limited to issues involving women and minorities. Remove these, and opinion shifts to the left on 11 issues and to the right on 12. The average shift is one point rightward per issue.

On the hottest, most politically relevant topics, society has moved leftward either very slowly or not at all. Over the past generation, it has moved to the right on gun control, the welfare state, capitalism, labor unions, and the environment. Although the particular time series on the chart does not reflect this, support for abortion has stabilized and may be dropping. This corresponds well with the DW-NOMINATE data that finds a general rightward trend in Congress over the same period. The nation seems to be shifting leftward socially but rightward politically – if that makes any sense.

Actually, it makes sense to me. More people are willing to admit that all h.saps are actually people, but the progress has not really been towards progressivism as such. (And some feel that people who haven't been born yet just might be people anyway; it's complicated.) Progressivism, I believe, involves a trust in government wisdom which has not increased with the years. Klein, who I think can be taken as a representative of the progressive elite (among the most-read pundits in the White House, apparently), says he trusts the voters. Some of them return that trust; others, even some who might agree with him on values, don't return it at all. Doubtless some voters are simply against "strengthening the right to vote", but I think there are quite a few who feel suspicious of the people who'd be implementing "same-day registration" (for example), on the grounds that advance registration somehow isn't good enough to provide the right to vote. Similarly for expanded gun checks, minimum wage, and so on. Hmmm... minimum wage...

Biden wants full employment, and Yellen says that his $1.9T stimulus bill can produce it. Meanwhile, the CBO says that a $15/hour federal minimum wage would cost 1.4 million jobs within a few years. There's no certainty in either of those, but that CBO statement doesn't sound like a recipe for re-election.

(My own view, FWIW, is that the effect of the minimum wage "in a few years" is the wrong focus. The effect that matters is the long-run effect: as you raise labor costs, you change the incentives for businesses planning new ventures with new investments in labor-saving technology. I therefore used to oppose minimum wage increases, at least at a Federal level: it's a long-run effect and I don't expect politicians to pay attention to it, but it does permanently hurt the people it was trying to help. (Today's the birthday of Joseph Schumpeter: "History is a record of 'effects' the vast majority of which nobody intended to produce.") Lately I've changed my mind; I now support minimum-wage increases in spite of the long-run effect I expect on unemployment. I think about the recent military successes of "drone swarms", and it seems to me that AI is developing in unpredictable ways from unpredictable starting points... (1) We might develop general AI starting from systems that try to figure out what humans are trying to do and use that knowledge to kill them. Or (2) we might develop general AI starting from retail systems that try to figure out what humans are trying to do and use that knowledge to give them what they want. I have a preference. A rising minimum wage, even while hurting, in the long run and in economic terms, the people it's intended to help, also provides a very slight improvement in the long-run odds for the survival of h.sap. Or so I think, beginning a couple of months ago. The data changes, so I change my opinion. But my reason for including this is just that it's complicated. I don't see Ezra Klein believing that; he sees a problem, thinks of a straightforward solution, and somehow doesn't think of Schumpeter at all.) Back to the subject...

And Alexander goes on with the international survey:

Over time, societies tend to move from traditional and survival values to secular-rational and self-expression values. This is the more rigorous version of the “leftward shift” discussed above.

And that's the shift that has spanned Biden's lifetime, and mine. I think it's a really good shift; I'd like it to continue. A much, much more energetic government, which is what Klein is promoting, will both help and hinder. Which effect dominates? At the moment, my money is on "hinder". I don't think that our government deserves to be trusted to handle sudden changes well, and I really worry about the future Trump, the smoother populist who gets in because Biden moved too fast, and who has no filibuster to stop him. So far I don't agree with Klein about means. I dunno about ends; agreement there is more likely.



Tuesday, February 02, 2021

Empathy Calibration

 

February 2, and still thinking about the Applied Rationality Training Regime sequence. In fact I expect (85% confidence?) to think about it almost every day this month, and on a lot of days further onwards. But my main hope at present is that I can use some of the tactics, and other things that have occurred to me or may occur to me in future, to improve my totally inadequate sense of what other people are feeling and what they want or need from me. (I'll re-reference my 2016 post On Being A Geek rather than reiterate a summary here.) I'm trying to be somewhat Bayesian here, with Sarah Som (my "Sens-O-Meter", in Mark Xu's terms) thinking about populations of possibilities. Two examples of attempted empathy, or at least of modelling other people's choices, came up this morning.

First, a choice that was to be made by my daughter. I wrote down what I thought she'd choose, with a "65%" confidence which I was really thinking of as about 2:1 odds, and that's what she chose. Okay... good. But when I thought about it later, I realized that the 2:1 odds weren't really about the choice itself, but about my model of what she'd be feeling. Call it model M. Within M, my confidence would be about 19:1, with the 5% residue having to do with my ignorance, specifically about things that might have happened to change her schedule. And outside M, the other one-in-three probability? Call that model X; there I had no idea -- it's just a label for model failure. So within X I guess I'd take the default prior of 50:50.

Now my actual confidence in the prediction should have been the sum of the two joint probabilities, model times prediction-within-model. From M I get 2/3 * 19/20 = 63.3%; from X I get 1/3 * 1/2 = 16.7%; so the confidence I should have had was about 80% overall. But it's probably more relevant that M grows a bit and X shrinks a bit. If I'd given M 20 bits of probability mud and X 10 in advance, representing their 2:1 odds ratio, then M would have put 19 of its bits on the correct prediction and X only 5, so their subsequent ratio would be 19:5. So I guess I'm now 79% confident in model M, which I think anybody else might well consider obvious. Yay?
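Written out as a tiny Python sketch (just my arithmetic above, nothing more; the numbers are the ones from the text):

    # Mixture-of-models prediction, then a Bayesian update on the models.
    priors = {"M": 2/3, "X": 1/3}          # 2:1 odds between the models
    likelihood = {"M": 19/20, "X": 1/2}    # P(correct prediction | model)

    # The confidence I should have quoted: sum of P(model) * P(correct | model).
    confidence = sum(priors[m] * likelihood[m] for m in priors)
    print(f"predictive confidence: {confidence:.1%}")   # about 80.0%

    # After the prediction came true, each model's posterior is proportional
    # to prior * likelihood -- the "probability mud" bookkeeping.
    posterior = {m: priors[m] * likelihood[m] / confidence for m in priors}
    print(f"posterior on M: {posterior['M']:.1%}")      # about 79.2%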

Second, an extended-family situation. Extremely elderly relation R died; he was a good guy, and he had a good life and a long one. I was surprised that person P didn't send out a family email, but thought up three possible scenarios S1, S2, S3, plus the "Something Else" scenario Sx as always, because I don't really know. S1 was simply that his wife wanted to decide what to say, and was taking a few days, and P was waiting on that; S2 and S3 were more complicated and less likely. So I was saying 75%, 10%, 5%, with 10% for Sx -- and then a simple question ruled S2 and S3 both out. Then I was told that his wife had said she and his kids were talking about what to say. Do I view that as a match for S1? I hadn't been thinking about his kids, or about likely communication delays. So, probably not; it's really Sx. And yet... I dunno. What do I learn? I dunno. Except maybe that I should have tried to draw the situation with circles and connectors, with at least different weights on different connectors; that might have prompted Sarah to expect a model-failure.
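The bookkeeping when S2 and S3 were ruled out is just renormalization of whatever is left; a sketch, assuming the ruled-out scenarios carry no residual weight:

    scenarios = {"S1": 0.75, "S2": 0.10, "S3": 0.05, "Sx": 0.10}

    # One simple question eliminated S2 and S3; renormalize the rest.
    remaining = {s: p for s, p in scenarios.items() if s not in {"S2", "S3"}}
    total = sum(remaining.values())
    posterior = {s: p / total for s, p in remaining.items()}
    print(posterior)   # S1 -> about 88%, Sx -> about 12%

On those numbers S1 ends up around 88%, which is part of why the apparently-Sx outcome stings.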

I spent part of the day thinking about trial and error and genetic algorithms here, probably because it's John Holland's birthday; a genetic algorithm ought to be a relatively less-bad way to search a space about which I know very little. Somehow, my thoughts go round and round and back again: I don't yet think it's a productive path for this problem. Maybe what I really need is a properly configured emotion chip; it's Brent Spiner's birthday too.
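For concreteness, the shape of the thing I was turning over, as a toy Python sketch. The bit-string encoding and the known-answer fitness function are placeholders, standing in for exactly the parts of the empathy problem I don't know how to write down:

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # placeholder for "the right answer"

    def fitness(genome):                 # how many positions match
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):        # flip each bit with small probability
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):                 # one-point recombination
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        keep = pop[:10]                  # survival of the better half
        pop = keep + [mutate(crossover(random.choice(keep), random.choice(keep)))
                      for _ in range(10)]
    print(gen, pop[0], fitness(pop[0]))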


Labels:

Sunday, January 31, 2021

"Applied Rationality Training Regime" Overall

    < ^  | 

January 31, the end of this subproject. (Click the "^" link above for the subproject index page, on which this page will be the last entry.) I've spent at least a little time (and one blog post) on each day of January on the idea of "applied rationality" -- specifically on what Mark Xu describes as his take on CFAR's (the Center for Applied Rationality's) take on ways to get your thinking and your actions a little more in line with your goals, a little less counter-productive; ways to improve a little in almost whatever direction you want to improve in. In the end, ways to improve your methods of self-improvement.

And has it helped? Yes, I think so. Some. I think doing this has 

  (a) pushed me into forming somewhat better models of what I'm doing when I try to form habits (although they're very awkward to phrase, which probably means I don't really understand them);

  (b) made it easier to form some new simple-but-useful habits;

  (c) made it easier to stop doing/saying annoying things that I wanted to stop;

  (d) made it easier to approach some problems more systematically.

As outcomes go, that's not bad. TAPs, Noticing, Murphyjitsu + Socratic Duck: in fact I'm now starting each morning with a brief meeting inside my head, a meeting with the characters I invented as labels (at least as labels) for what I need to do for each of those. The habits I'm trying to reinforce get assigned to Checklist Charlie, my TAP-dancing Laputan flapper-spider. Warnings about what I'm about to say or do come from Marian, the Noticer. For working through problems and anticipating troubles, I talk with Spider-Duck as well as Sarah Som and Jim Pright, who form my Murphyjitsu team. I'm trying to use sketches more, simply because I'm better at symbols... I might end up trying to create a spider who draws.

Each of these is really, I think, serving as a low-bandwidth messenger (or at least a message-locus, a label for a mail-drop) between my "conscious self" -- my central story as continually updated -- and the other systems that don't communicate very well with it. You can describe it as communication between System 2 and varying parts of System 1 if you wish, but I don't frame it that way. I'm starting from the same point as Arnold Geulincx, the 17th-century follower of Descartes who noted that his identity was his consciousness, so that what he wasn't conscious of, such as muscle management, wasn't, couldn't be, part of his identity. "Since, then, the movements of my body take place without my knowing how ..." it followed that "I do not cause my own bodily actions" and in the end, "I am therefore a mere spectator of this machine." I do think that's going too far: the label "I" within my story applies not only to having viewed, but to having shared in some of the decisions. As Mark Xu puts it in his followup post of TAPs for Tutoring, I may not be deciding how my central pattern generator walks, but I will remember having asked it to "do the walking thing." (Okay, he didn't really put it that way, but that's what I got out of his use of that phrase.) Of course it might be less direct than that. I know that I'm asking some part of me to "do the walking thing", and I know that a central pattern generator is involved, and I think I can claim that our understanding of CPGs is sufficient for me to say that any attempt to "walk consciously" will interfere with it, making me more clumsy rather than less. But I strongly suspect that there is at least one additional layer between my consciousness and the CPG: "I" don't talk to it directly.

And why am I talking in this clumsy way? Well, partly because it's Geulincx' birthday, but mostly because this really is my current view of what I'm working with when I deal with the not-so-conscious parts of me. Of us. Whatever. I would like to develop a good vocabulary for this, and I'm going to try, and in the process I'll be trying to extend my messengers in what I think of as Bayesian directions. But I'm also going to try to learn some specific stuff that simply interests me; I'm going to go on with trivial form-building and try to watch myself learning IPA. (As a programmer I've worked with IPA as adapted for some endangered languages, but I didn't understand the IPA layers any more than I understood the Russian or Japanese translation layers. So I'm curious, and I didn't like being unable to pronounce "Geulincx" from Wikipedia's IPA: https://itinerarium.github.io/phoneme-synthesis/ didn't accept it, although I was able to work it out with https://www.internationalphoneticalphabet.org/ipa-sounds/ipa-chart-with-sounds/ .) It's a small thing to learn, and worth learning. We'll see. Happy Birthday, Arnold.




Labels:

Saturday, January 30, 2021

"Applied Rationality Training Regime" Review#5

    < ^  |  >

 

 January 30, Review#5. 

Murphyjitsu with a puzzle as a form-building toy problem: a sixth-grade puzzle of filling in a simplified and relabelled periodic table. The puzzle was sent to me, with a separate solution, by a granddaughter who lives a few thousand miles away; so, for me at least, there's the issue of solving it, and there's the much more complex issue of what to say about it. From the end of the day, I see it as a conversation involving Sarah+Jim, my Murphyjitsu team, on both issues; Spider-Duck on the puzzle; Marian on what (not) to say... and my wife, to approve the result. Now I'm wishing I could do it all over again, for "form-building", but I guess what I've spent the last little while on is at least partially just that -- doing it over again in my head, but with better form, with my sub-problems and checklists better organized than they were this morning. I think I can claim that I'm not just working the puzzle as I would have worked it before all this (model "A"), and not just working out some self-improvement (model "B"), but working on improving my self-improvement (model "C"), and thus to some extent applying Douglas Engelbart's ABC Model of recursive self-improvement. Yes, it's his birthday. And yes, I'm aware of the Yudkowsky "Insufficiently Recursive" critique:

  1. Engelbart committed the Classic Mistake of AI: underestimating how much cognitive work gets done by hidden algorithms running beneath the surface of introspection, and overestimating what you can do by fiddling with the visible control levers.
  2. Engelbart anchored on the way that someone as intelligent as Engelbart would use computers, but there was only one of him - and due to point 1 above, he couldn't use computers to make other people as smart as him.

Those both strike me as legitimate, but OTOH I think that Engelbart's work arguably did start a recursive improvement process: easier and faster creation of each generation of software leading to a new generation of hardware, leading to opportunities for a new generation of software, and so on. Epistemic status? Well, I'm not confident, but a lot of stuff happens that isn't hidden, and without Engelbart even some of those hidden algorithms would have run quite a bit more slowly. As a graduate student in the UPenn CIS Dept in the late 70s, I think everybody I knew thought Moore's Law would peter out in a while; that came up in discussions of the need for massive parallelism (and proving theorems about massive parallelism, in my case). It didn't peter out. Why did new kinds of circuitry, not just shrunken versions of the old, keep appearing when they were needed? Partly, I think, because interactive software, and the mice that helped it squeak, did make engineers and even physicists effectively just a bit more intelligent than they would have been without it. I'd really like to have a better sense of how strongly I should believe that, if at all, but it's not something I'm going to work on now. Tomorrow ends the month.

Labels:

Friday, January 29, 2021

"Applied Rationality Training Regime" Review#4

    < ^  |  >


January 29, Review Day #4 -- I focus on applied rationality as per Anton Chekhov: "Don't tell me the moon is shining; show me the glint of light on broken glass." (Even if it's rephrased a little.) It's his 161st birthday, and I try to make my TAP-dancing Checklist Charlie as vividly concrete as I can, with TAP-shoes and a cane; the cane TAPs on items I'm supposed to remember. This morning at 4, after taking the dog out when I got up, I checked the garage temperature but didn't close the garage door quietly, and my daughter's bedroom is right overhead. Okay, I have practiced closing that door quietly, and an echo of Charlie has started to remind me for other doors as well as that one. (The "echo of Charlie" or "version of Charlie" idea is basically that I can have different Charlies who are themselves evoked as TAPs by triggers like doors, coffee pots, chairs, whatever. I can't handle many, but a few seems okay.)

I am going a bit beyond Mark Xu's ten-times rule for TAP reinforcement in that I try for spaced repetition, and this does help. To make it work better, Charlie has a pocket-watch. This doesn't work well yet. And, at least today, I haven't been asking him to help Sarah and Jim with solving actual problems via Murphyjitsu, even toy ones. Okay, tomorrow I will look at a toy problem, a puzzle from today's email, and I will try to think about "form-building".
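If Charlie's pocket-watch were real code, it might amount to no more than an expanding-interval rehearsal schedule; a Python sketch, where the doubling rule is my own assumption rather than anything from Mark Xu's posts:

    from datetime import date, timedelta

    def rehearsal_dates(start, first_gap_days=1, repetitions=6):
        # Yield rehearsal dates, doubling the gap after each rehearsal.
        gap = timedelta(days=first_gap_days)
        when = start
        for _ in range(repetitions):
            yield when
            when = when + gap
            gap = gap * 2

    for d in rehearsal_dates(date(2021, 1, 29)):
        print(d)
    # 2021-01-29, 2021-01-30, 2021-02-01, 2021-02-05, 2021-02-13, 2021-03-01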

Today is also W.C. Fields' 141st: "Start every day off with a smile, and get it over with." This, too, is applied rationality, but Marian remarks that Fields' misanthropic humor is exactly the kind of distancing that I created her to warn me against. It has its place, and I'm not gonna leave it behind entirely, but I do appreciate the warning. Maybe Fields' over-the-top version of it is really all right, some of the time: "I am free of all prejudice -- I hate everyone equally" is certainly an interesting approach to overcoming bias. But I think I'll start tomorrow with a toy problem.



 

Labels:

Thursday, January 28, 2021

"Applied Rationality Training Regime" Review#3

 

   < ^  |  >

 

January 28, Review Day #3, and this morning, while walking on an icy road with my daughter, I explained that the imaginary cartoon spider TAP-dancing on her head had just reminded me of what she'd said earlier on the walk. Through the day it happened again, a couple more times -- Checklist Charlie has actually become somewhat useful. Can I "systemize" on that basis? Maybe, at least partly. Marian also spoke up when I was about to say the wrong thing -- well, I think it would have been the wrong thing. It would be nice to believe that I can become less wrong than I was. Charlie did fail me once, and I think it was because I was insufficiently real in my cartoon-spider-TAP-dance visualization. I need a proper mix of fact and fancy... it's not quite enough to say with Kathleen Lonsdale (the crystallographer who flattened benzene, whose birthday it is) that "in science as in the arts, there is very little worth having that does not require the exercise of intuition as well as of intelligence, the use of imagination as well as of information." I've gotta imagine the right things, making the right use of the information.

And it's also Arthur Rubinstein's birthday: "Love life, and life will love you back." This, too, is applied rationality. 

I keep thinking that "Applied rationality" is such an awkward name. I like "selves-control" or "selves-training" better, but I can't imagine anyone else would.

I was just looking at "Mental subagent implications for AI Safety":

Assume, for the sake of argument, that breaking a human agent down into distinct mental subagents is a useful and meaningful abstraction. The appearance of human agentic choice arises from the under-the-hood consensus among mental subagents. Each subagent is a product of some simple, specific past experience or human drive. Human behavior arises from the gestalt of subagent behavior. Pretend that subagents are ontologically real. ...Some agents are going to be dis-endorsed by almost every other relevant agent. I think inside you there's probably something like a 2-year-old toddler who just wants everyone to immediately do what you say.

I thought at first, "A-ha! That's exactly how I think of it; it's what I've been doing throughout." But now I don't think it's quite right. It's not that your stories, your subagents, are ontologically real so much as that they will have been ontologically "real", as real as your overall self. They are, and you are, stories that you tell your selves. Tentatively, I see no reason to believe that more than a small fraction of them are products of simple, specific past experiences or human drives, either. And I don't think this has the same implications for AI safety, but it might.


Labels:

Wednesday, January 27, 2021

"Applied Rationality Training Regime" Review#2

 

   < ^  |  >

January 27, and I've been trying to go over the methods and adapt them to current issues, with some partial success (maybe). Yesterday, I thought I was going to give Marian, the Noticer of unhelpful humor, a bunch of named sister-spiders to help in noticing some of my failure-of-social-skill bad habits, but something happened which I remember from trying to write fiction: once you create a character, they've been created, and to a considerable extent they say what they want to say.

Marian says she'll deal with her sisters, or leave them out of it; she just wants a list of the bad habits, with examples she can use for TAPs. She wants these TAPs to be managed as a checklist, and so she asks for a Systemizing spider named Checklist Charlie. (I'd thought about defining him on day #10 but hadn't quite settled him in my head; I guess I was too busy being balky.) Charlie starts as an animation of an old shopping-list trick. You can associate numbers with rhyming words: one-gun, two-shoe, three-tree, four-door, five-hive, six-sticks and so on, and make a little cartoon-image story for your eggs, butter, milk, etc., with a gun firing eggs, a shoe filled with butter, a tree whose fruit is milk cartons... whatever. Repeat the list a few times and it really does help memory. Okay, Charlie's standing, TAP-dancing, on two of his legs, but he has six for memory, and as a cartoon he can pull out more if needed; so number-one is his gun-claw, and it holds a TAP with a trigger, the action being the first item on the checklist. That's fairly easy, even when I find that a given trigger (being in the kitchen by the coffee pot) ought to trigger several things. In that case, even though Charlie's not really recursive, he can have a baby checklist spider hanging from that claw. This actually worked this morning -- but in one case I had used the wrong trigger and didn't get triggered until too late. Working on it...
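As a data structure, Charlie is barely more than a mapping from triggers to actions, with a nested list as the baby checklist spider; a Python sketch, with triggers and items of my own invention:

    # Triggers map to checklists; a nested list is a baby checklist
    # hanging from one claw (several actions behind a single trigger).
    taps = {
        "in the kitchen, by the coffee pot": [
            "take the morning pills",
            ["start the kettle", "check the calendar"],   # baby checklist
        ],
        "hand on the garage door": ["close it quietly"],
    }

    def fire(trigger):
        # Walk the checklist for a trigger, indenting baby checklists.
        def walk(items, depth=0):
            for item in items:
                if isinstance(item, list):
                    walk(item, depth + 1)
                else:
                    print("    " * depth + item)
        walk(taps.get(trigger, []))

    fire("in the kitchen, by the coffee pot")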

And does this really help Marian? She thinks so, because she thinks my social-skills failures are grouped together, and she's not at all clear about what I said in the ... We are having a continuing discussion with Spider-Duck about how that works.




Labels: