Friday, January 01, 2021

"Applied Rationality Training Regime" #1: What is applied rationality?


It's New Year's Day for 2021, following a year not noted for the prevalence of rationality of any kind, and I don't think I've personally done nearly as well as I could have. I'd like to do better. I've been reading occasional posts on LessWrong.com for a few years now, and I think I'd like to try Mark Xu's "Training Regime" based on a program from the Center for Applied Rationality:

My take on CFAR's take on applied rationality, compressed into 30 days. Each day has some number of exercises, few of which should require more than 15 minutes.

So I did read Training Regime Day 0: Introduction, but I'm not going to describe anything I've been doing as a training regime; maybe I should, but not now. Instead I just go on to Training Regime Day 1: What is applied rationality?, where Xu provides three "Takes" by example and metaphor that slide into definitions, claiming

   (1) "applied rationality fills the gaps of science in the pursuit of truth" where science is described as a gradual social process, based on observation. AR works by making mind-changing quicker and easier, and by enabling you to decide when to apply science anyway. I'm not sure that Take 1 says very much here, so I'm probably missing something important and should come back to it later.

   (2) "applied rationality is the skill of being able to select the proper sources of information during decision-making"; 

   (3) it's "a system of heuristics/techniques/tricks/tools that helps you increase your values, with no particular restriction on what the heuristics/techniques/tricks/tools are allowed to be."

And of course my immediate response is "maybe". I'm going to connect this to a fundamental take on consciousness/identity that I sorta believe and Xu probably doesn't share: 

you are the stories and story fragments that you tell yourself (and the mental maps they include and play out on). There's more to it than that... "you" are a focal point in those stories, etc., included recursively within them. But it's stories all the way down, from "Considerations of the Existence of Evil and What That Says About The Flying Spaghetti Monster" to "OUCH! (hand jerks away from stove)" and "Awwww... (puppy's being cute, we all smile)".

 Basically, I see "applied rationality" as a fancy phrase for a kind of story coherence, so I do have to say more about those stories.

 Your stories have implicit and explicit goals and values, some ephemeral and some steady-state and some periodic and some happy-ever-after ultimate. Your stories have implicit and explicit choices. Your stories have implicit and explicit knowledge feeding into those choices.

 This applies to the full range, from immediate choices like what finger-motion I "choose" to produce the next key-click, to higher-level choices like what word comes next in the sentence to help produce an idea, to higher and higher level choices.  Some of these choices are labelled as "automatic" and others are "conscious"; these have different levels in the story structure. Most of the effect of conscious choices is the way they set up automatic choices, preferably by training habits. 

 I'm not disagreeing with Jonathan Haidt here except in my emphasis on story: I visualize his elephant as a puppet pulled this way and that by swarms of spiders, each with its story-thread, and I think of your consciousness as a special role, but not a different or even separate kind of entity, not really an elephant-rider. It's Anansi himself, or Grandmother Spider, and you probably have more than one at any given moment; but you (recursively, within your stories) are always spinning them and their stories together retroactively, and you will frequently think that you, as a single conscious being, made a choice consciously when in fact one of you made it automatically and just managed to hack together a plausible-to-enough-of-you explanation. Does that make sense to you? No? Me neither, but I think something like it is very likely True.

So there's some rather fundamental incoherence in the structure of your stories, and that's going to be true no matter how rational you succeed in being; there are some kinds of coherence to be expected, no matter how irrational you seem overall. 

So what does "applied rationality" mean to me at this point? I'd say that rationality is a label I'd apply to some conscious choices. If you're not making a conscious choice, then you may be doing exactly the right thing, and you may have reached your current process through rationality, but neither of those means that you're being rational now. The Taoist sage of Raymond Smullyan's The Tao is Silent (Smullyan also wrote First-Order Logic, which was a graduate-school text for me, and other kinds of books) is described thus:

The Sage falls asleep not
Because he ought to
Nor even because he wants to
But because he is sleepy.

He's not being rational or irrational; he's being a Sage, and that's "perfectly all right with me." ("Whichever way the wind blows, whichever way the world goes"...another poem from the same book.) So... shall I stop beating around the bush and just say it? 

  Any time you decide that your conscious choices made you less likely to achieve your goals than alternate choices would have, and that you could have known that in advance, you can say you were being irrational. This is a statement about your internal story and how its pieces fit together. If you manage to redirect that insight to the future, to your own internal science fiction, if you manage to be the person whose choices are improved by that insight, then you've developed "applied rationality." Applied rationality is (in this context) redirected literary criticism.

 For me, applied rationality is not an ultimate goal but it's a subgoal for so many other goals that it might as well be one. Still, it's worth remembering not only Smullyan's Sage but the peculiar twist on his story that's told as a quote from T.H. Huxley, "Darwin's Bulldog", in at least one book:

"I protest, that if some great power would agree to make me always think what is true and do what is right, on condition of being turned into a sort of clock that is wound up every morning, I should close with the offer."
             T.H. Huxley (quoted by a Professor Drummond)

 Huxley's ideal and Smullyan's are both choice-free, but I'm not sure they're similar. In the end they may be just the same, or exact opposites. Still, they are both somewhere along lines pointed to by Alfred North Whitehead:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

 At any given moment, you can think of your rationality as a resource. One which can, perhaps, be augmented by a training regime. We'll see; I don't expect to be as long-winded as this every day, but I'll try to do something for each day, until instructed otherwise by the application of such rationality as I have.

(Notes, cut out from the post as I wrote it)

Jan 1 is the birthday of Sir James Frazer, who wrote The Golden Bough and traced man's progress from irrational primitive superstition to religion to science. It's also the birthday of Satyendra Bose, for whom bosons (e.g. photons) are named. Bose was the quantum mechanic who showed that if coins were photons and we each flipped a pair of them, then instead of 4 possibilities HH, HT, TH, TT with probability 25% each (so that half of us see one head and one tail), there are only 3 possibilities, each with probability 33.3...%: HT and TH are the same state, because photons don't have distinct identities the way our intuitions tell us they do. So that, some of us might say, completes a circle from Frazer's straight line.
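If you'd like the counting spelled out, here's a minimal sketch in Python (my own illustration, not anything of Bose's; the "photon coin" framing is just the metaphor above):

from fractions import Fraction
from itertools import product

# Classical, distinguishable coins: HH, HT, TH, TT, each with probability 1/4.
classical = {outcome: Fraction(1, 4) for outcome in product("HT", repeat=2)}
p_one_each_classical = sum(p for outcome, p in classical.items()
                           if set(outcome) == {"H", "T"})

# Bose-Einstein "photon coins": the particles have no separate identities,
# so HT and TH collapse into a single state, leaving three states
# (HH, HT, TT) that are equally likely at 1/3 each.
bose = {state: Fraction(1, 3) for state in [("H", "H"), ("H", "T"), ("T", "T")]}
p_one_each_bose = bose[("H", "T")]

print(p_one_each_classical)  # 1/2 -- half of us see one head and one tail
print(p_one_each_bose)       # 1/3 -- only a third of us do

The classical answer is 1/2 and the photon answer is 1/3; that gap between intuition and counting is the whole surprise.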

Jan 1 is also the birthday of Michael Owens, whose first patent application was clearly part of Whitehead's advance of civilization:

Heretofore in the art of blowing glass, there has been a blower necessary...

 There's something glorious about that sentence, and something ominous too. But Owens was, I believe, a practitioner of applied rationality in the advance of civilization. And I think I'll stop there... Happy Birthday to Frazer, Owens, and Bose.
