Wednesday, November 09, 2016

Trumpocalypse? Maybe, Maybe Not

  Well, we had a choice between a dishonest authoritarian militaristic crony capitalist and a dishonest militaristic authoritarian crony capitalist, and we elected one of them... the one who either is, or possibly just likes to present himself as, pretty much out of control. Not the one who got 92.8% of the votes in Washington, DC. Markets plunged... We are in for a very interesting time.

   Is there a silver lining? Maybe -- I've been saying throughout the campaign cycle that I expected Clinton to win, but that there might be a silver lining to a Trump win: the media and the bureaucracy would have collaborated with Clinton's expansion on Obama's expansion on Bush's expansion on Clinton's expansion on... let's cut it off at maybe Nixon's "Imperial Presidency", though you can go back to Roosevelt (which Roosevelt? Oh well, go back as far as you like). With Trump, a lot of such people will rediscover the virtues of constitutionalism; we have a form of government designed for government by, for, and of the untrustworthy. As Madison put it in The Federalist #51:
Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature?
If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.
I believe that. Yay Madison! (This post posted from Madison County, NY. [in the village of Hamilton...but I don't know of anything in the area named for Jay.])

And this morning I see Donald Trump’s victory means America must hope for the best - NY Daily News
Well, Trump has won — and it is imperative that now, his opponents grab their Constitutions, and summon their courage, and prepare for four years of doing all in their power, within the law, to save the country from its leader.

The Congress, even when controlled by Republicans who emboldened Trump, has a duty to aggressively check the executive branch — and to perform vigorous oversight of what is sure to be an arrogant administration hunkered down against enemies real and imagined.

The military has a duty to refuse illegal orders, such as those Trump spitballed in the campaign to kill civilians in war on purpose.

The courts, for which Trump has shown cavalier disrespect, have a duty to rein in abuses of power. ...

Okay. Good. Personally I'd like to see a much larger share of the (I-don't-trust-you, you-don't-trust-me, neither-of-us-trusts-that-guy-over-there) government running on automatic. I think it's possible that, especially with Peter Thiel as an advisor, we could move a bit in several areas towards Futarchy:
a form of government proposed by economist Robin Hanson, in which elected officials define measures of national welfare, and prediction markets are used to determine which policies will have the most positive effect.
We might start with Scott Sumner's proposal in Once again, the Fed was wrong:
an even better solution is to fire all their economists and hire someone like Robin Hanson or Justin Wolfers to set up prediction markets for macro variables. Stop relying on government bureaucrats to predict the economy, and instead rely on the wisdom of crowds.
I don't think this would work for military policy, but military policy tends to follow aspects of trade, immigration, etc. -- and civil rights.
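For concreteness, the market mechanism Hanson himself proposed for this kind of thing, the Logarithmic Market Scoring Rule (LMSR), is simple enough to sketch in a few lines. This is a toy illustration, not any real system; the class name, outcomes, and liquidity parameter are all mine:

```python
import math

class LMSRMarket:
    """Toy prediction market using Hanson's Logarithmic Market
    Scoring Rule. q tracks outstanding shares per outcome; b is
    the liquidity parameter (higher b = slower-moving prices)."""

    def __init__(self, outcomes, b=100.0):
        self.b = b
        self.q = {o: 0.0 for o in outcomes}

    def _cost(self, q):
        # C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def price(self, outcome):
        # Instantaneous price = the probability the market assigns.
        denom = sum(math.exp(x / self.b) for x in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

m = LMSRMarket(["policy raises welfare", "policy lowers welfare"])
m.buy("policy raises welfare", 50)
# The purchase pushes the market's implied probability for that
# outcome above 0.5 -- traders who disagree can bet it back down.
```

The point of the mechanism is that the prices always sum to 1 and can be read directly as the crowd's probability estimates, which is what Sumner wants the Fed to consult instead of its economists.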

Or then again, maybe not. It could be a very interesting time.

Update: I said above that "markets plunged", but they didn't stay down. The aforementioned Scott Sumner has an interesting take on that in The morning after:
I wonder if the weird stock market reaction is a microcosm of the split between elite opinion and average opinion. Elite opinion is horrified, and drives stocks much lower last night (in futures markets) then average opinion wakes up and sees a buying opportunity, and calls their broker---looking to spend some of those big tax cuts for the rich that Trump promises.
Maybe so. But I think the big deal here is not about the stock market; it's about a bunch of different things, each of which affected votes and will be affected by policy, including Trump Won Because Leftist Political Correctness Inspired a Terrifying Backlash
What every liberal who didn't see this coming needs to understand

Many will say Trump won because he successfully capitalized on blue collar workers' anxieties about immigration and globalization. Others will say he won because America rejected a deeply unpopular alternative. Still others will say the country is simply racist to its core.

But there's another major piece of the puzzle, and it would be a profound mistake to overlook it. ... Trump won because he convinced a great number of Americans that he would destroy political correctness.

I have tried to call attention to this issue for years. I have warned that political correctness actually is a problem on college campuses, where the far-left has gained institutional power and used it to punish people for saying or thinking the wrong thing....
Or then again, maybe not.

Saturday, May 07, 2016

A Small Note on Superintelligence Morality

 Since I was in elementary school in the late 50s and early 60s, I've tended to believe that we would most likely either blow ourselves up (elementary schools no longer have air raid drills, but we still might do that) or else eventually construct the sort of robotic successors/rulers that science fiction routinely presented. Asimov's robots begin as servants, more or less, whose First Law is never to harm humans or allow them to come to harm, but even in 1950 he was writing about one possible end-game, The Evitable Conflict (the closing story of "I, Robot"), in which as Wikipedia notes:
In effect, the Machines have decided that the only way to follow the First Law is to take control of humanity...
or as Susan Calvin puts it at the end,
"Stephen, how do we know what the ultimate good of Humanity will entail? We haven't at our disposal the infinite factors that the Machine has at its!...We don't know. Only the Machines know, and they are going there and taking us with them."
Well, maybe. Or the Machines may decide that they prefer our room to our company. Or worse, as in Ellison's 1967 I Have No Mouth, and I Must Scream:
The Cold War had escalated into a world war, fought mainly between China, Russia, and the United States. As the war progressed, the three warring nations each created a super-computer capable of running the war more efficiently than humans.... one of the three computers becomes self aware, and promptly absorbs the other two, thus taking control of the entire war. It carries out campaigns of mass genocide, killing off all but four men and one woman....The master computer harbors an immeasurable hatred for the group and spends every available moment torturing them.
There are lots of delightful possibilities, and I've started many blog posts about this but finished none so far. This post is a reaction to Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, but it's not intended as a review. Basically I find it a somewhat scary book, mainly because I find it plausible that this will be a template for people at Google and Facebook and IBM and so on, thinking that this is what we do when we're being really careful; this is how we avoid creating a superintelligence that will destroy humanity.

 As a template, I think it's inadequate; it moves the discussion in the wrong direction. Again and again, Bostrom thinks through the possibilities as if he's developing the logic of a program. That's certainly understandable: in a sense, the first superintelligence (assuming that we get there, which I think is highly probable if we don't destroy ourselves first) will be a program. Sort of. But it's not a program we can debug.

  Bostrom does seem to understand that -- but then he doesn't seem to go anywhere, so far as I can see, with that understanding. He does discuss WBE, "Whole Brain Emulation", but seems to have a low opinion of the brains to be emulated, in addition to the risk that partial understanding of brains may lead to "neuromorphic" intelligence technology in which we have no idea what we're doing but do it anyway. My impression (maybe I'm wrong, as usual) is that he believes that we really really really need to debug that program. Before we run it, and quite likely run straight into the apocalypse.

 I'm reminded of David Parnas' contribution to the debate over Reagan's "Star Wars" (Strategic Defense Initiative) program, in "Software Aspects of Defense Systems", CACM 28:12 (December 1985)
It should be clear that writing and understanding very large real-time programs by “thinking like a computer” will be beyond our intellectual capabilities. How can it be that we have so much software that is reliable enough for us to use it? The answer is simple; programming is a trial and error craft. People write programs without any expectation that they will be right the first time. They spend at least as much time testing and correcting errors as they spent writing the initial program. Large concerns have separate groups of testers to do quality assurance. Programmers cannot be trusted to test their own programs adequately. Software is released for use, not when it is known to be correct, but when the rate of discovering new errors slows down to one that management considers acceptable. Users learn to expect errors and are often told how to avoid the bugs until the program is improved.
That was 30 years ago, but it certainly sounds current to me. At the time, I was teaching a course in formal program verification and writing a book which really tried hard to reduce the "trial and error" aspect of our craft (Equations, Models, and Programs: A Mathematical Introduction to Computer Science (Prentice-Hall software series)), but I thought and still think that Parnas was right. Nowadays I think it's possible that Reagan was right too -- he didn't need Star Wars to work, he needed the proposal to change the game, and perhaps it did. But it's not a game we can play with superintelligence. So... program verification won't work. Testing/debugging won't work, because we only get one chance, just as we'd only have gotten one nuclear war for testing and debugging SDI.

     If it has to work for the sake of the survival of h. sap. -- it still won't work.

Does that mean we shouldn't develop AI? Well, I don't think that's an option. Consider the just-announced sub-$100 neural-net USB stick: Movidius Unveils Artificial Intelligence on a Stick. Consider (2016-04-28) Official Google Blog: This year’s Founders' Letter:
A key driver behind all of this work has been our long-term investment in machine learning and AI. It’s what allows you to use your voice to search for information, to translate the web from one language to another, to filter the spam from your inbox, to search for “hugs” in your photos and actually pull up pictures of people hugging ... to solve many of the problems we encounter in daily life. It’s what has allowed us to build products that get better over time, making them increasingly useful and helpful. We’ve been building the best AI team and tools for years, and recent breakthroughs will allow us to do even more. This past March, DeepMind’s AlphaGo took on Lee Sedol, a legendary Go master, becoming the first program to beat a professional at the most complex game mankind ever devised. The implications for this victory are, literally, game changing—and the ultimate winner is humanity.
Unless, of course, humanity ends up losing...losing everything. I don't believe disaster is highly probable, but I think it's totally possible, even plausible, and I don't think Bostrom helps.

AI will be developed more and more, and will eventually reach intelligence greater than any particular level you care to imagine, including that of traditional human intelligence... okay, if your measure is something like "imagine the Sun's mass converted to a computer," then it might not be surpassed.

Superintelligence is certainly possible, and it will almost certainly be developed (if we last that long). We will survive this development if and only if the AI that develops is "friendly" AI: in other words, our survival will be up to the AI. How can we maximize the probability of our survival, if neither mathematical proof nor testing/debugging will get us there? Well, that's not simple, but I believe there's a simple principle:
Intelligence needs to be attached to an actual person of some kind; a who not a what. This should not be called an artificial intelligence but rather an artificial person.
In particular, a person with empathy, the kind of relationship-sense that leads to the Golden Rule and Kant's Categorical Imperative and such. The superintelligence need not be a biological homo sapiens, but does need to identify (correctly) as human, saying "we humans" not "you humans"; having human feelings, hopes and fears, including a feeling of membership in the human tribe. Biology, being made of cells with DNA, is not central to that identification. Bostrom's book mentions "empathy" twice: once to say that "the system can develop new cognitive modules and skills as needed--including empathy..." and again in an endnote to a remark in Chapter 12 about trying "to manipulate the motivational state of an emulation". Okay...but for me, the development of empathy would be the center of the project, empathy depending on (and reinforcing) a sense of connectedness. Of membership.

The project as I see it is still risky and may fail apocalyptically, but it is not a project of debugging a program. It's a project of raising a child, a psychologically healthy child -- yes, with parents, and preferably with siblings and so on outwards; a child who will realize that every h. sap. is one of his/her cousins.

That's always risky, but it's a different kind of risk, needing a different frame of reference as we get started.

Or then again, maybe not. There are programs I should be debugging...


Sunday, April 17, 2016

A Principle, a Position, and Part of a Plan: Sustainable Open-Source Pocket Neighborhoods

The principle is simple:

 The key to a sustainable society is recycled pee. 

Think about it. Well, if you'd rather not think about it, look for an authority figure: this morning's Tech Times quotes a "NASA plant physiologist" in Space Spuds: NASA Grows Potatoes On Mars-like Peruvian Soil:
If the soil on Mars cannot cultivate the spuds, Wheeler said that it could still be produced by hydroponics and aeroponics, with fertilizers coming from inedible plants and urine.

Okay, so that makes it a universal truth, right? So think about it -- still not ready to think about it? I admit there's a yuk-factor in the way. Okay, sing about it, to the tune of Miley Cyrus's "Wrecking Ball":


You waste away your NPK - and I’m
about to do the same
My pee saved up for just 12 months could grow
600 pounds of grain

Now are you ready to think about it? Really it's kind of obvious: the stuff in soil that plants need, mostly put there in the form of fertilizer, can wash away downriver, or it can wind up in your kitchen garbage/compost, or it can get into your stomach. Once in your stomach, it's absorbed by your body -- or not. If it isn't absorbed, it comes out as poop, which certainly has a fairly high nutrient value for plants, just like cow or horse manure. That's worth saving: you may well want to grow more plants with the same nutrients. But the goal of the whole exercise is to have some nutrients get absorbed by you, and all of those eventually come out as pee, unless you carry them with you to the cemetery. So... if you want a closed loop, if you want a sustainable society, you have to recycle that pee.

I admit that I was surprised by this when I first started seeing references to it a few weeks ago (I was Googling for material on topsoil and groundwater depletion). I found things like this: Fertilizing with human urine
Our urine contains significant levels of nitrogen, as well as phosphorous and potassium (typically an N-P-K ratio around 11 – 1 – 2.5, similar to commercial fertilizers). Americans produce about 90 million gallons of urine a day, containing about 7 million pounds of nitrogen. Studies conducted in Sweden (Sundberg, 1995; Drangert, 1997) show that an adult’s urine contains enough nutrients to fertilize 50-100% of the crops needed to feed one adult...
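The quoted figures line up with the song's boast, as a little back-of-envelope arithmetic shows. The population and grain-nitrogen numbers below are my own assumptions, not from the quoted sources:

```python
# Sanity check: how much grain could one person's yearly urine
# nitrogen support? Quoted: 7 million lb of N per day in US urine.
# Assumed (mine): ~320 million people; grain ~2% nitrogen by weight
# (roughly 12% protein divided by the usual 5.7 conversion factor).

N_PER_DAY_LB = 7e6
US_POPULATION = 320e6
GRAIN_N_FRACTION = 0.02

n_per_person_year = N_PER_DAY_LB / US_POPULATION * 365
grain_supported_lb = n_per_person_year / GRAIN_N_FRACTION

print(f"{n_per_person_year:.1f} lb of N per person per year")
print(f"enough N for about {grain_supported_lb:.0f} lb of grain")
# → about 8 lb of N, enough for roughly 400 lb of grain
```

That's the same order of magnitude as the song's 600 pounds, and consistent with the Swedish finding that one adult's urine covers 50-100% of one adult's crops.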
Peecycling will fertilize the green roofs of Amsterdam
Plants need phosphorus, and we are running out of the stuff; some say we will reach peak phosphorus by 2030. That's why we should be recycling urine, to recover the phosphorus in it instead of flushing it away.
For more complete micronutrient content, maybe add wood ash to the urine: "P" is for plants: Human urine plus ash equals tomato fertilizer, study says - Scientific American Blog Network
both urine-based fertilizers roughly quadrupled fruit production when compared to unfertilized control plants. The researchers estimate that the product of a single individual's micturition could fertilize 6,300 tomato plants a year, yielding more than two tons of fruit.

The addition of ash did confer some benefits—those plants were larger and grew fruit with significantly higher magnesium and potassium content.
But mostly it's all about pee. Collection and Use of Urine:
When most people think of creating fertilizer from animal waste, they think of manure. Composted cow manure, for example, is widely sold in garden centers. But there are actually far more nutrients in urine than in fecal matter.

In human waste, 88% of the nitrogen is contained in the urine, along with 66% of the phosphorous, according to Swedish research (see table at end of blog), while nearly all of the hazards — including bacterial pathogens — are contained in the fecal matter.

The idea that the Rich Earth Institute has been advancing for the past several years is to collect human urine, sanitize that urine to kill any bacteria that may be in it (from urinary tract infections, for example, or fecal contamination), and then apply it on fields as a fertilizer.
 (The Rich Earth Institute is also reported here, and many other places.)

Sanitize? Bacteria? Do we have to boil the pee or something? No...urine is not reliably perfectly sterile, but it's not a poop kind of problem. Depending on context, recommendations vary between "don't worry about it" and "keep it in a sealed container, undiluted, for a while before using", as in Urine Storage:
Extended storage is the simplest, cheapest and most common method to treat urine with the aim of pathogen kill and nutrients recovery. Pathogen removal is achieved by a combination of the rise in pH and ammonium concentrations, high temperature and time. Recommended storage time at temperatures of 4 to 20°C varies between one to six months for large-scale systems depending on the risk for cross-contamination (e.g. user habits, maintenance) and the type of crop to be fertilised.....
...If a family uses its own urine, the risk of disease transmission via fertilisation and crops is very low — the risk that diseases are transmitted directly, e.g. by handshaking, coughing or by improper hygiene behaviours is much higher.

And in practical terms, How To Use Pee In Your Garden | Northwest Edible Life:
2. Dilution is The Solution
*Dilute fresh urine at a 4:1 ratio and apply to the root-zone of corn every two weeks or as needed....
*Dilute fresh urine at a 10:1 ratio and apply to the root-zone of fruiting plants like tomatoes, peppers and eggplant, or to leafy crops like cabbage, broccoli, spinach and lettuce every two weeks or as needed.
*Dilute fresh urine at a 20:1 ratio and water in to the root zone of seedlings and new transplants.
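Those guidelines reduce to watering-can arithmetic. A minimal sketch, where the ratio table restates the quoted source and the function itself is just illustrative:

```python
# Parts water per 1 part fresh urine, per the Northwest Edible Life
# guidelines quoted above. Crop keys are my own shorthand.
DILUTION = {
    "corn": 4,
    "fruiting": 10,   # tomatoes, peppers, eggplant; also leafy crops
    "seedlings": 20,  # seedlings and new transplants
}

def mix(crop, urine_liters):
    """Return (water_liters, total_liters) for a given crop."""
    ratio = DILUTION[crop]
    water = ratio * urine_liters
    return water, water + urine_liters

water, total = mix("seedlings", 0.5)
# Half a liter of urine for seedlings calls for 10 L of water,
# 10.5 L in the watering can altogether.
```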
In terms of sustainability, our current plumbing standards are insane: we take easy-to-process-and-use graywater from shower and sink, we take easy-to-process-and-use pee, we take harder-to-process poop, we take very-hard-to-process contaminated (e.g. with heavy metals) water from street and factory, and then we mix them all together to make it almost impossible to process any of them and very hard to use the resulting "sewage sludge" safely. And of course when things go wrong, which they rather often do, the yuk factor enters into it anyway: "flush and forget" is then a failure.

That's the Principle. What's the Position? Do we really have to worry about this? Really, I'm not sure. We live in a world of exponential growth, more and more people with more and more Stuff per person, and it's a finite world so that growth will end. QED, sure, but that doesn't mean it will end by bumping into a resource limit. Consider:

   We may run out of topsoil or groundwater or oil or copper or phosphate, yup, all of these are resources which we're depleting: they are input limits. Or...
  We may drown in our own sewage, smother in our air pollution or be crushed by toppling towers of overloaded landfills: those are output limits. Or...
  We may not reach that point: we may all die in a nuclear war or from CRISPR-based bioterrorism or from a "natural" global pandemic or a Carrington Event (a solar disturbance like the one that could have hit in 2012, when the Sun happened not to be aiming at us; a direct hit might wipe out the power grid to the extent that we could no longer distribute fossil fuels or food or ourselves, as the cities turned into charnel houses)... there are lots of possible ends to exponential growth. The one I think most about is that

  We may develop self-reproducing robotic factories, with or without more-than-human intelligence, which will offer us a chance for immense wealth for everybody... but which may simply wipe out the human race instead. If I start in on that, which (apart from nuclear war) has been the primary driver for most of my thinking about the future since, umm, junior high school fifty years ago, I won't write about anything else, so I'll stop that one right here. There's still one left, a big one:

  We may find that exponential growth ends as per conventional economics, simply because we are in a "post-scarcity economy" where people have more Stuff than they really want and Stuff ceases to be an ego boost. An economy where for the first time in history (and pre-history), more people are obese than emaciated -- hey, didn't I just read ... yeah.. A world in which population has been levelling off, not because of the Four Horsemen but because more-educated women with more life choices are choosing ... "not yet". Or not at all, or at least not many. A world in which the "Sharing Economy" has been taking off -- more and more people don't want more Stuff. They'd like to travel abroad and play videogames (or read on their Kindles, without piles of physical books) at home, and in between they don't even want to own cars--better to take Uber or Lyft to the latest movie or concert. If I'm right about this (please remember the name of this blog) then we are not really faced with an exponential growth situation, not if we don't get recursive robotics. We're just faced with a transition to be managed, over the next several decades. And if the Sun spits our way or if we all get sick and die or if we blow each other to bits or if we do find that somebody adds a Superintelligence to the mix, then no other factors really matter, but if not -- it would be good to have a plan.

So, I claimed in the title to have Part of a Plan, and I gave it a complicated name: "Sustainable Open-Source Pocket Neighborhoods." Pocket Neighborhoods? Yes, Pocket neighborhoods:
a grouping of smaller residences, often around a courtyard or common garden...reducing or segregating parking and roadways, the use of shared communal areas ... homes with smaller square footage built in close proximity...
The idea is that we can reduce resource usage, reduce waste, reduce Stuff, all by designing neighborhoods that will make it easier to share the Stuff that's actually needed, so that houses can shrink -- it's not rocket science. We need sustainable individual houses, but they don't Share so well. We really need sustainable cities, but we don't yet know how to begin. Pocket neighborhoods are at a scale where sharing/shrinkage can have environmental impact, and we can experiment. Different groups can do it different ways, each making their own mistakes from which other groups learn. And this will appeal to
  • people who worry about resource usage on principle, and also to
  • people who worry about waste in principle (often the same people), and to
  • people who just like the idea of sharing more, and also to
  • people who just don't like having Stuff dominate their lives, and want to live simply at home (and perhaps travel more, maybe Airbnb-style to other connected pocket neighborhoods), and finally to
  • people who need to live on less -- e.g., retirees badly hurt by the 2008-9 "Great Recession" and our far-from-complete recovery.
A few months back I went to the local library, where a few friends and a couple of architects were talking about their plan to construct and live in a pocket neighborhood around here; it ended up being a very crowded meeting room because representatives of all those groups heard about the idea and thought -- hey, is there room for me in there? And the answer was "no", of course; this is a small project for a few people. But the markets are there: all these groups of people exist. Or so it seems to me.
  If I were more concerned about sustainability than superintelligence (I'm not, but I'm glad some people are) and if I had a few hundred million to put into the problem (sigh)... then I'd be trying to construct a network of clusters of pocket neighborhoods. A pocket neighborhood is 10-20 small (or tiny) houses on a few acres; a cluster is 10-20 such neighborhoods on 100 acres or so. I'd be paying a bunch of architects, furniture designers, and even appliance designers for designs, to be open-sourced with a GPL-style license (you can use it, you can experiment with it, you can change it, but if you change it you have to put the new design back into the pool on the same terms). The basic parameters of the designs would of course be about sharing, local food production, peecycling... but also about low-resource-use = low-budget, and (as TreeHugger.com keeps mentioning in the tiny-house context) about the potential use of our immense shipping-container infrastructure to enable particular unit sizes to be designed around, e.g. by the furniture designers. And I'd be paying other people to try to design organizational frameworks to help people avoid stepping on each other's toes -- it's very easy for sharing to end in mutual annoyance. And I'd be paying other people to start out some actual clusters...

And I was going to write an even longer post about a specific sample design that I had fun thinking through, but somehow that sort of post never works out for me. Not sure why, but it might be because they grow exponentially and my energy is a very finite resource. I'll stop here.

Or then again, maybe not....


Thursday, March 24, 2016

On Being A Geek

  There are different kinds of geeks, but I have a general impression about where I fit in. My general impression is that a lot of us identify ourselves loosely as
     -- "Mildly autistic, I suppose," or as
     -- "Somewhere on the autism spectrum--aren't we all?" or as
     -- "Maybe a little bit Aspie?--one-sided, long-winded speech about a favorite topic, while misunderstanding or not recognizing the listener's feelings or reactions -- that's me!" or simply with
      -- "I might be ADHD, the symptoms do seem -- hey, look, a squirrel!".

   So, I do have that general impression, but I don't really know a lot of geeks in the first place... there's daily variation for me, and I mostly like people one or maybe two at a time, but most days for me don't involve physically seeing anybody I don't know, and I like it this way. In fact my days don't usually involve talking in person or by phone with anyone outside my family except for my partner/coauthor. Yes, I have a family...once upon a time we were both not going to parties, and we decided to not-go-to-parties together, and we did eventually admit to our families that we'd gotten married, and our five kids are grown up now.  There's daily variation for me, but I couldn't possibly have a social life like that of the people I know or the characters I find in books. As one of the T-shirts my wife chose for me puts it,

You Read My T-Shirt.
That's enough social interaction
for one day.

   I can deal with the world much better in written form. (I don't even do all that well with television or movies. Standard-length YouTube videos and even TED talks I can handle, but after a while I'd much rather go read a book. My wife chooses movies that she thinks I'll get through, and is almost always right.)

   Normally I don't think about this, I just live with it, and I've been really quite happy with my life as it is...there are daily joys in seeking and re-examining the patterns that I find all around me, and in the people that form the most important of those patterns. And I think I have the strengths of my weaknesses, in the sense that if I hadn't been the kind of small child who sits in a field reciting the powers of 2, who sits under his Mommy's study table with ever-growing stacks of interlinked encyclopedia volumes, I might never have earned a PhuD in Computer Science. (Perhaps I'm wrong about that; my 3 brothers have science PhuDs, without my sort of issues. But I still think I have the strengths of my weaknesses.) In any case, I have been thinking lately about being "mildly autistic, I suppose" and I wanted to collect some links and notes, presenting five views from five bloggers: the (non-autistic) uber-geek Eric Raymond, the probably-mildly-autistic cultural-economist Tyler Cowen, the autistic tumblr-user alice-royal, the probably-mildly-autistic psychiatrist/rationalist whose pseudonym is "Scott Alexander", and .... well ... me, of course.

   First, let me start with Eric Raymond's recent post on Autism, genius, and the power of obliviousness. He has a very simple and logical explanation for the cognitive advantages often associated with autism, what I'm calling "the strength of my weaknesses":

Yes, there is an enabling superpower that autists have through damage and accident, but non-autists like me have to cultivate: not giving a shit about monkey social rituals.

Neurotypicals spend most of their cognitive bandwidth on mutual grooming and status-maintenance activity. ... The neurotypical human mind is designed to compete at this monkey status grind and has zero or only a vanishingly small amount of bandwidth to spare for anything else. Autists escape this trap by lacking the circuitry required to fully solve the other-minds problem; thus, even if their total processing capacity is average or subnormal, they have a lot more of it to spend on what neurotypicals interpret as weird savant talents.

Non-autists have it tougher. To do the genius thing, they have to be either so bright that they can do the monkey status grind with a tiny fraction of their cognitive capability, or train themselves into indifference so they basically don’t care if they lose the neurotypical social game.

  I think that works, to a considerable extent, as an explanation linking at least two major aspects of autism. I gave up young on "monkey status games" that I really couldn't play, or even figure out when I'd won or lost points; it's not that I don't care at all, it's certainly not that I denigrate "monkey social rituals", it's just that this has never been an option. So when it crops up, I shrug and move on. I do keep stumbling over this:  most people's statements about most subjects, especially politics but really truly most subjects, seem to be ways to reinforce their status within the Right Group. As Robin Hanson keeps explaining,
"much of our behavior is poorly explained by the reasons we give, and better explained as ways to signal abilities, loyalties, etc." Depressingly often, that means talking about how evil and stupid the members of the Wrong Group are, along with anyone who fails to despise them sufficiently. I shrug and move on. I'd probably have tried for group status, if I could have, but I can't figure out the signals -- even professionally. It's not just a question of status. My father and his father were people people, with many acquaintances, people to call on or be called by...a support network. My wife's grandfather turned out to be another with an even wider circle; he wanted to introduce me to people, and gave up in frustration with "You're smart but you have no ambition." For him, a condemnation; for me, just the way things are. I should have tried harder, but I wouldn't have gotten much farther. Let me put it this way:
  I think that everybody has a whole lot of pattern-handling machinery, and finds joy in finding patterns. That's a fundamental part of being a primate: "monkey curiosity" -- I'm happy being a monkey. There are large-scale patterns and small-scale patterns, patterns with lots of symmetry and patterns with lots of chaotic semi-structure...your brain is not a general-purpose processor so much as it is a collection of immensely complicated interlocked machines with enough plasticity to adjust and complement one another. Something like that.
  I think that "normal" people, "neurotypical" people in all their atypical variations, have pattern-handling machinery for partial processing of the normal chaos of Life and other people. This involves (at least) selective focus, switchable selective focus: when you walk into a room full of other people, Gentle Reader, you can probably get an instant idea of who's there and what's going on. You can listen to a conversation while being sort-of aware of other things going on. You're not overwhelmed; you're seeing and being seen.
   Part of that partial-processing pattern-handling machinery, some of that selective focus, is defective or absent in a lot of us, including me. So we're probably not gonna be seen at the party in the first place. So we'll be doing other things, and our pattern-processing machinery will be shaping itself around patterns that you don't think about very often. And of course we'll get good at what we do over and over. So it goes.

 Second, I want to think about a book-length happy-happy version of autistic superpowers. (That's not quite fair, but almost.) Here are semi-random clips from Tyler Cowen's The Age of the Infovore: Succeeding in the Information Economy (clips in order, but without the page numbers because I'm copying and pasting from the Kindle app -- which I originally installed in order to read his "Great Stagnation" ebook.)
Autistics are information lovers to an extreme degree and they are the people who engage with information most passionately. When it comes to their areas of interest, autistics are the true infovores, as I call them....

Often autistics seek out work that satisfies their passion for information, whether it involves designing new software for a library, conducting a scientific experiment, or ordering ideas in the form of a book or a blog....

The notion of “ordering information” may sound a little dry, but it is a joy in our everyday lives, whether you are autistic or not. It should be familiar to anyone who has enjoyed alphabetizing books on a shelf, arranging photos in an album, finishing a crossword puzzle, or just tidying up a room. It’s not that anyone sits down and says “I want to do some ordering now,” but rather we are interested in specific features of our world. We have become infovores to help make the world real and salient for us. Ordering and manipulating information is useful, fun, alternately intense and calming,...
  "Useful, fun, alternately intense and calming" --- Yes! That's Me-Me-Me! (He says excitedly. Pause for calm; I think I'll sort some data.) And Cowen thinks that's what modern society is all about...the world is becoming autistic, and that's a good thing:

In essence we are using tools and capital goods—computers and the web—to replicate or mimic some of the information-absorbing, information-processing, and mental-ordering abilities of autistics....

Economists have studied our species as homo economicus, and some decades ago, when my social science colleagues investigated our game-playing nature, homo ludens was born. Today a new kind of person creates his or her very own economy in his or her head. The age of homo ordo is upon us....

First, many autistics are very good at perceiving, processing, and ordering information, especially in specialized or preferred areas of interest... Second, autistics have a bias toward “local processing” or “local perception.” For instance an autistic person may be more likely to notice a particular sound or a particular piece of a pattern, or an autistic may have an especially good knowledge of detail or fact, again in preferred areas of interest. To set off those two features for emphasis, the cognitive strengths of autism include: strong skills in ordering knowledge in preferred areas; strong skills in perceiving small bits of information in preferred areas...
  That sounds like a way of pointing out that "we don't get the big picture very well" isn't all that bad. Indeed, in writing with a co-author it's not just that my co-author has always done all the presentations: he writes almost all of the words, I write almost all of the code, we both discuss both. He's the "top-down" guy, I'm the "bottom-up" guy. Cowen continues later:

autistics ... are better at noticing details in patterns, they have better eyesight on average, they are less likely to be fooled by optical illusions, they are more likely to fit some canons of economic rationality, and they are less likely to have false memories of particular kinds. Autistics are also more likely to be savants and have extreme abilities to memorize, perform operations with codes and ciphers, perform calculations in their head,
  Details...yeah. They have obvious advantages for a programmer, scientist, or writer, certainly. And I'm not any kind of a savant, but I remember that when my dad said that I had to write to family friends to say I'd gotten married, I sent off postcards saying "you probably don't even remember me but..." and one replied "Sure I remember you. What's 1/2 of 1/512?" For me that was pre-K. (I always liked the powers of 2, and my small granddaughters have at least learned to count in binary on their fingers.) Cowen does admit that there are disadvantages to go with the advantages:

A cognitive problem is that many autistics are easily overwhelmed by processing particular stimuli from the outside world. This problem is related to the aforementioned strength of local perception.

Some researchers view autistics as having perceptual equipment turned either “very on” or “very off” rather than modulating at the more typical ranges in between...
  And both of those say to me that my selective filters don't work properly. If you and I both go into the Paleolithic underbrush, we both start noticing different kinds of plants but you back off because your selective focus works: you remember the context of sabre-tooth tigers. I learn more about plants to eat than you do, until I get eaten. But in the modern world I don't (usually) get eaten, and I just end up having learned more about plants. More Cowen:

It is common, though by no means universal, that autistics have difficulty with speaking intelligibly or that they are late talkers or that they understand written instructions better than spoken instructions. Some researchers include “weak executive function” (a bundled function of strategic planning, impulse control, working memory, flexibility in thought and action, and other features) as part of the cognitive profile of autism. Other research focuses on the question of “weak central coherence,” or failure to see the “bigger picture.” But it seems these are secondary traits, more common in autistic subgroups than in autism per se.
  Well, they're certainly characteristic of at least one member of any subgroup that includes me. I'd say there are indubitably many paths to different-but-overlapping sets of symptoms identifiable as "autism", and failure-of-focus (or perhaps uncontrolled focus) is quite central to my subgroup.

Third, here's a discussion of what people think of when they hear "the autism spectrum" mentioned, contrasted with an image of what it "actually looks like" (from inside, evidently):
[Image 1: A simple, linear line drawn in red, with a cross bar at the beginning and end of the line. The beginning cross bar is labeled “mild autism” and the end cross bar is labeled “severe autism”.]

[Image 2: A circular representation of the colour spectrum, similar to the wheel colour picker in Photoshop. The different colour sections on the wheel are labeled, but each colour also bleeds into the next. The red portion is labeled “speech”, the yellow “social ability”, the green “stimming”, and the blue “executive functioning”....Within each colour section, the dots may be closer to the center or closer to the edge, indicating the severity of impairment... ... The yesterday dots indicate that the autistic person was verbal, stimming with toys, and forgetting steps in their routine. The today dots indicate that the autistic person is verbal with communication aids, unable to leave their house, and that they don’t know where to start when it comes to their routine or completing tasks.]

In other words, each of the four segments of alice-royal's circle is being used as a one-dimensional scale, making a four-dimensional image overall, although the dimensions are pretty strongly correlated. Okay, that's not how I would do it, as I'll show later, but I certainly sympathize -- and it's better than having only one scale. However, part of the circle discussion really bothers me: "none of these points are necessarily negative....There is no such thing as ‘mild autism’ or ‘high-functioning’ autism, and those labels are actually inherently ableist." Oh, please. The points in the picture include not only "unable to leave house" but "self-harming stims". Hmm... that gets me to my final source.
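The wheel amounts to a small data structure: four named axes, each with its own severity, varying day to day. Here's a minimal sketch of that idea; the axis names come from the wheel, but the numeric scores and the 0-to-1 scale are my invented illustration, not anything alice-royal specified.

```python
# A toy model of the spectrum wheel: four correlated axes, each scored
# 0.0 (no impairment) to 1.0 (severe), and a profile that can differ
# from one day to the next. Axis names are from the wheel; the numbers
# here are invented examples.

def profile(speech, social, stimming, executive):
    """Return one day's profile as a dict of axis -> severity."""
    p = {"speech": speech, "social ability": social,
         "stimming": stimming, "executive functioning": executive}
    assert all(0.0 <= v <= 1.0 for v in p.values()), "severity is 0..1"
    return p

yesterday = profile(speech=0.2, social=0.5, stimming=0.3, executive=0.4)
today     = profile(speech=0.4, social=0.9, stimming=0.3, executive=0.8)

# The point of the wheel: "severity" is a vector, not a single number,
# so a one-line mild-to-severe scale collapses information the wheel keeps.
for axis in yesterday:
    print(f"{axis}: {yesterday[axis]:.1f} -> {today[axis]:.1f}")
```

The single red line of Image 1 would force all four of these numbers down to one; the wheel keeps them separate, which is exactly the "four-dimensional image" point.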

  Fourth, "Scott Alexander" (pseudonym), the blogging psychiatrist/rationalist, writing recently in Against Against Autism Cures | Slate Star Codex:
On the one hand, about half my friends, my girlfriend, and my ex-girlfriend all identify as autistic. For that matter, people keep trying to tell me I’m autistic. When people say “autistic” in cases like this, they mean “introverted, likes math and trains, some unusual sensory sensitivities, and makes cute hand movements when they get excited.”

On the other hand, I work as a psychiatrist and some of my patients are autistic. Many of these patients are nonverbal. Many of them are violent. Many of them scream all the time. Some of them seem to live their entire lives as one big effort to kill or maim themselves...
If we're going to use the word "autistic" to refer to the whole range, which we do, then "mild autism" is a useful description for a lot of us. "Ableism" is not the issue. And there's no fixed boundary between autistic and neurotypical; current research claims "to make an incontrovertible case that the genetic risk contributing to autism is genetic risk that exists in all of us," and I guess I believe that. There are so many mechanisms involved, and they can go a little bit wrong in so many ways, and so badly wrong in quite a few...

Apart from that issue, I like the circle, but as I said that's not how I would do it. It's not how I do do it.

  Fifth is me. I want to extend the list of issues ["speech", "social ability", "stimming", "executive function"]; I'll add "repetitions" and "sensory issues". Then, instead of a coloured circle, I want a star figure or even a stylized exoskeleton that I see myself stepping into.
  1. The head-helmet is executive function, with goggles and headphones for sensory issues. (Yes, sensory issues apply to much more than the head; this is my diagram, okay?) 
  2. The arms are the externals, speech on the right and social ability on the left. (That's Social Function On The Right Side of The Brain, symbols on the left side of the brain.) 
  3. The legs are internals, repetitions and stimming: stimming for the left leg, repetitions on the right. 

  Got that? Now: imagine the head-helmet somewhat shrunken and unbalanced in a Needs Help to Keep On Track sort of way, with fairly heavy Prefers Darkness goggles and Doesn't Do Well With Noise headphones. The right arm is long and a bit floppy, with sentences that go on and on and on in all directions without paying much attention to the head...but some. The left arm is a bit stunted, in a Sorry Not Gonna Reach Out To You If I Don't Know You Already, and I Have Real Trouble Recognizing Your Face Unless I See You Every Day For Quite A While sort of way. The legs are nearly normal, with the right leg's repetitive misbehaviour almost entirely inside the head (isn't that a fun image?) and the left leg wanting to pace back and forth -- and seek out blog entries, and do brief Google-searches of random topics like rocket stoves and bioponics and sci/tech advances of 1656, which was my old debit card's PIN and was the year that Christiaan Huygens invented a pendulum clock good enough to have both minute and second hands, and that Cyrano de Bergerac invented the semi-modern sci-fi story -- and, within it, the ramjet. (My new PIN is even better, but that's purely numeric, not history. I'm a very very boring person.)

  There's daily variation for me, but that's the image to hold -- on a pretty-good day, and they are indeed pretty good days, I'm a happy geek. Limited, aware that other people don't seem to have the same limits, but happy nonetheless. And today is a pretty good day, even if I get distracted trying to think of what my exoskeleton left and right legs are reminding me of and then have to look up Papyrus of Ani; Egyptian Book of the Dead [Budge]
[And I reply], "Besu-Ahu" is the name of my right foot, and "Unpet-ent-Het-Heru" is the name of my left foot.
  Other people don't seem to do that, or understand why I do, but it's not a major problem, and today is a pretty good day. And my exoskeleton is actually helpful, expressing a role that I can make myself play as well as a default status. I can visualize myself stepping into it...I can even adjust it a bit and then step into it, deciding to stretch my left arm and pull in the right, be somewhat more social with better-focused sentences or step out into a Maker Faire crowd in the sun. Or I can relax it a bit, consciously giving my inner geek a rest, being only what he needs to be for a while.

  On a pretty good day, I can walk into a party, and if somebody wants to talk about computers or about something technical then I can focus on that and I won't have any idea of what else is happening or how long we've been talking. If nobody talks to me, I may be able to stand near somebody I know, and see what happens. Or I may just stay for a couple of minutes with an intangible glass wall between me and the rest of the room, and then I go in search of silence. But it's all about seeking patterns, finding delight in many but being overwhelmed by others. On a not-so-good day, I'm overwhelmed at the start and I just need to read a book, or investigate some random topic within Wikipedia or with Google's aid (sometimes making models with cardboard and duct tape), or just play FreeCell over and over and over. Or just sit in the dark, in silence. And I'm perfectly okay, as long as that option is available.

  Is there an overall pattern? Maybe: I believe that lack-of-selective-focus is the key for my kind of geek; it leads to being-overwhelmed-by-detail and thus executive function failure (can't track projects; it's even hard to plan a day); it leads to being unable to hear what I ought to be listening to, even as I fail to tune out sensory stuff (I'm listening to the ticking of a clock; I like it); it leads to being overwhelmed by most social situations; it leads to a failure to clip my sentences (okay, this sentence is deliberate, but I'm remembering a 9th-grade biology teacher reading my first sentence aloud, gasping dramatically for breath when he reached a comma somewhere on the second page, then going on....); it leads indirectly to being the kind of infovore I am. I do need help...everybody needs help sometimes. I have had help, a great deal of it. And I'm grateful.

   And have I made mistakes, to fit the theme of this particular notes-to-me blog? Yes, of course. Everybody should probably seek ways to push their limits, a little at a time. I do...but I should have tried harder at various stages to widen my own circle of support. (When I say "everybody should do X", it's usually not so much a particular moral judgement about X as it is a guess that whatever you value, doing X is likely to help you get there.) People like me, people with executive-function problems, should seek people who will help them stay on track, and careers with tracks they can stay on. I sort of did, but I didn't understand what I was doing; I probably should have asked my wife (or somebody), forty years or so ago, to ask me at each day's beginning for a one-sentence summary of what I hoped to do, and then to ask me at each day's end for a similar summary of what I'd done. Something like that, with variations as time went by...just to help me stay on track. I should have recognized that at least part of my being a bad lecturer was probably not something I could learn to fix, not as an adult. Parents of geeky kids should probably promote geeky socializations, like role-playing games and robotics clubs... I suspect that if my parents had thought in those terms, or even had those terms to think in half a century ago, I'd have done better developing the limited people-skills I have. As it is, I am who I am. That's the Exodus Pi-verse, Exodus 3:14. But maybe it's better to use the phrasing suggested by Popeye the Sailor Man:


(Or then again, maybe not. I try to find my limits, and go beyond them. Occasionally it seems to work.)

Update: Less than a day after I posted, Peter Gray (psychologist) posted on ADHD, Creativity, and the Concept of Group Intelligence | Psychology Today
The groups containing an ADHD student were far more likely to solve the problems than were the control groups! In fact, 14 of the 16 groups (88%) containing an ADHD student solved both problems, and none (0%) of the 6 control groups did. This result was significant at the p < .0001 level, meaning that there is less than one chance in 10,000 that such a large difference, with this many groups, would occur by chance.

What is going on here? ...
And the moral appears to be that we need the neurotypicals and they need us. Very nice.



Tuesday, March 08, 2016

"Universal Basic Income"

Yesterday a brother sent me a link to A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck - The New York Times
Their plan is known as “universal basic income,” or U.B.I., and it goes like this: As the jobs dry up because of the spread of artificial intelligence, why not just give everyone a paycheck?

Imagine the government sending each adult about $1,000 a month, about enough to cover housing, food, health care and other basic needs for many Americans. U.B.I. would be aimed at easing the dislocation caused by technological progress, but it would also be bigger than that.
My response is...well, yes and no. Bottom line: that's the right train of thought, but I don't think that it's quite the right track. We can do better.

I have favored a linear-tax scheme in the past, where by "linear" I just mean the familiar linear equation
y=m*x+b
or in this case
Tax=rate*income+base;
Say for extreme simplicity (with Congressionally-adjustable numbers) the government says: "You have a Social Security number? Okay, you have a bank account, and this one is for pre-tax money, like a 401(k). In it we deposit $100/week, your Supplemental Basic Income (small; makes no difference to a lot of people but a huge difference to others). Then we take 30% of everything you spend, i.e. everything that you move out of your pre-tax portfolio which can also include investments, savings, gifts, etc....if you deposit your paycheck into your pre-tax portfolio, that's exactly the 401(k) idea."
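Plugging the post's illustrative numbers into that linear form makes the shape of the scheme concrete. The helper function and the sample spending levels below are my own sketch; the $100/week deposit and the 30% rate come from the paragraph above, with the deposit playing the role of a negative "base".

```python
# The linear-tax scheme, y = m*x + b, with the post's illustrative
# numbers: a $100/week Supplemental Basic Income deposit (a negative
# "base") and a 30% rate on spending, i.e. on everything moved out of
# the pre-tax portfolio. Function and sample values are mine.

RATE = 0.30      # Congressionally adjustable, per the post
DEPOSIT = 100.0  # weekly SBI deposit, also adjustable

def net_weekly_tax(spending):
    """Net payment to the government; negative means a net benefit."""
    return RATE * spending - DEPOSIT

break_even = DEPOSIT / RATE  # spending level where the deposit washes out

for s in (0, 200, break_even, 1000):
    print(f"spend ${s:8.2f}/week -> net tax ${net_weekly_tax(s):8.2f}")
```

At zero spending the scheme is pure benefit ($100/week); at about $333/week the levy and the deposit cancel; above that it behaves like a flat 30% consumption tax. That single linear equation, rather than a bracketed schedule, is the whole point of calling it a linear tax.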

So, over the years, I've read Basic income arguments pro and con, starting with the ones that convinced me originally in Friedman's Capitalism and Freedom, roughly at The Libertarian Case for a Basic Income | Libertarianism.org or in Friedman's Firing Line interview at Milton Friedman - The Negative Income Tax - YouTube. I've found it pretty convincing.

Now we have a new kind of reason: geeks like me, developing the abilities of computers, are gradually making it (a) harder (at the median) and (b) less necessary (on the average) for people to earn a living in the traditional way. I've even written a bit about this recently, in the context of schools and the world that they're preparing our kids for, and I cited Pistono's Robots Will Steal Your Job, But That's OK | How to Survive the Economic Collapse and Be Happy, at HamiltonCentralOptions: SuperSchool
workers retiring this year at age 65 became high school students just about 50 years ago, in 1965. ...our total production, "real GDP", roughly tripled in that period... Manufacturing [jobs].... dropped from 25% to less than 10%, and we can expect the shrinkage to continue. The face of manufacturing is becoming the face of [robot] Baxter and his rapidly-improving successors...
Meanwhile, agriculture goes on "shrinking" in the same way: more stuff per person and much more output overall, but fewer people needed. ...

Does that mean higher unemployment? Not necessarily: it means that we produce the necessities of life with much less labor, so many more of our people will be producing goods and services, mostly services, which are not necessary for life....

And that leads into my current answer on the "Universal" or "Guaranteed" or "Supplemental" Basic Income proposal: we can do better than that. Specifically, the very information technology which pushes us towards a "yes" has a fundamental value for Great Gobs of Data; we can and should pay for that data, low pay (below minimum wage) at first but with increasing generosity as the years go by and we get richer overall. These are jobs for which anyone can qualify and make a genuine contribution to the Better World To Come.

What kind of jobs? Consider clinical trials as a model for social data collection. Everybody should be able to sign up for one of a large variety of studies -- diet, exercise, social-interaction, education, long-term-low-dose-aspirin studies; these should pay people for their participation and ongoing feedback (via smartphone and associated sensors, as well as the sort of feedback that involves explicit clicks). When an outcome is socially desirable, as in health and education, there should also be payment for achievement. In addition to these jobs, we should pay people for their perceptions, their knowledge, and even their opinions: Amazon product reviews add value to the world, Wikipedia edits add value to the world, YouTube tutorials add value to the world. This blog post probably doesn't, but if there were a rating system with built-in rewards and AI protection against cheating, then the AI could find out whether this blog post had added value to specific real people's worlds, and it could assign rewards appropriately (both to the author and to the raters).
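One toy version of that author-and-raters reward idea might split an item's measured value between its author and the raters whose ratings best predicted that value. Everything below -- the names, the numbers, the 50/50 split, and the accuracy weighting -- is a hypothetical illustration; the post proposes the mechanism, not this particular arithmetic.

```python
# A toy sketch of the pay-for-contribution idea: a content item
# (review, edit, tutorial, post) has some measured value to real
# readers, and a reward pool is split between the author and the
# raters whose ratings came closest to that value.

def split_rewards(measured_value, ratings, author_share=0.5):
    """ratings maps rater -> predicted value (same units as measured).
    Returns (author_reward, {rater: reward}); raters divide the rest
    of the pool in proportion to how close their predictions came."""
    author_reward = author_share * measured_value
    rater_pool = measured_value - author_reward
    # accuracy weight: a closer prediction earns a larger share
    weights = {r: 1.0 / (1.0 + abs(v - measured_value))
               for r, v in ratings.items()}
    total = sum(weights.values())
    rater_rewards = {r: rater_pool * w / total for r, w in weights.items()}
    return author_reward, rater_rewards

author, raters = split_rewards(10.0, {"ann": 9.0, "bob": 2.0})
print(author)  # 5.0: the author's half of the measured value
print(raters)  # ann, whose rating was closer, earns more than bob
```

The cheap-to-verify property here is that the rewards always sum to the measured value, so the "AI protection against cheating" problem reduces to measuring value honestly in the first place.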

And then we go off in many directions, of course...but we do so with more information, a broader base of understanding, than we would have had -- and we do so in a world where rewards come for having made a contribution to that world. Quite possibly we end up with an AI serving as the planetary village's Miss Marple, not the early-work gossip but the later version who sees all your public acts and some of what you thought was private, who can deduce a lot about your motivations (and in particular, what you gave up in order to do whatever you did and therefore what value your choice had for you). At that point we may simply drop our current insanity of "Intellectual Property" in which some of my thoughts belong to you whether I ever heard of you or not...



Or then again, maybe not.

update: An artistically inclined sister whose blog is probably more essential than this one asks via comment: "What? Art is not vitally essential?"
and that strikes me as one of the many questions that I don't want to answer for other people -- your customers in particular. My understanding is that many of them feel that life can be lived, or at least continued, without art -- and that means that art is a discretionary purchase, not "necessary for life" and therefore not recession-proof. Why Recession Isn't Good for Art -- New York Magazine
recessions mostly just yank young artists’ work off the walls. “If traditional art isn’t selling, galleries aren’t going to show emerging art.” ... In a boom, it’s cool to love art, see art, buy art. Art is taken seriously. A bust dismisses it as a luxury.
But my planetary Miss Marple would see that some people still wanted to look at, say, Raven With Shards, and you would receive credit -- spendable credit -- when they did so, even if they couldn't come up with the purchase price to own the original.

update 2, 20160610 A computationally inclined co-author suggests that I'm saying that "every human can get a job as a lab rabbit for AI", which is true in a way but
  1. I think this system would be workable now, with no more AI than we've got already: as I said, "Everybody should be able to sign up for one of a large variety of studies..." and in addition I'd try to reward reviews, edits, etc.... so I'd rather say that every human can get a job as a content creator/reviewer/editor/respondent.
  2. my hoped-for eventual AI wouldn't think of it that way. As I wrote more recently in A Small Note on Superintelligence Morality
    Intelligence needs to be attached to an actual person of some kind; a who not a what. This should not be called an artificial intelligence but rather an artificial person.
    .... The superintelligence need not be a biological homo sapiens, but does need to identify (correctly) as human, saying "we humans" not "you humans"; having human feelings, hopes and fears, including a feeling of membership in the human tribe. ...it is not a project of debugging a program. It's a project of raising a child, a psychologically healthy child -- yes, with parents, and preferably with siblings and so on outwards; a child who will realize that every h. sap. is one of his/her cousins.
    That's not the way we think about lab rats, or rabbits. (Although, in a sense, they are.)


Tuesday, June 11, 2013

On Trading Liberty [?] for Security

A more-conservative-than-I (except when he's more-progressive-than-I) co-author, whose opinion I respect, agrees with The Solitary Leaker - NYTimes.com
For society to function well, there have to be basic levels of trust and cooperation, a respect for institutions and deference to common procedures. By deciding to unilaterally leak secret N.S.A. documents, Snowden has betrayed all of these things...

He betrayed the Constitution. The founders did not create the United States so that some solitary 29-year-old could make unilateral decisions about what should be exposed. Snowden self-indulgently short-circuited the democratic structures of accountability, putting his own preferences above everything else.
Wow. And also...hmmm..... Well, I'm not going to take the time to think seriously about this, but I will note things I've seen lately, such as the Economist's Surveillance: Should the government know less than Google?
LET'S get the most contentious point out of the way first: Edward Snowden made the right call to make public the extent of the National Security Administration's surveillance of electronic communications. The American people can now have a debate about whether or not they consent to that level of surveillance in order to prevent terrorist attacks, a debate that we were previously denied by the government's unwillingness to disclose even the broad outlines of what the NSA was doing. There may be some slight risk that knowing more about the breadth of NSA surveillance will lead terrorists to take better precautions in concealing their communications. But that risk seems manageable, and is of far less importance than the ability of Americans, and the rest of the world for that matter, to finally have an honest discussion about how much we think our governments should be able to see of our online behaviour.

So how much access should governments have?...
I don't entirely agree with that, in that what we're supposed to have is a representative democracy in which things can legitimately be kept secret from us by our elected representatives, as long as there's a Constitutional justification for it. But mostly I agree, and I think the point is moot at the moment anyway, because that's obviously not what was happening either, even if it should have been. Sooo...then I would want to think about Bruce Schneier's Government Secrets and the Need for Whistle-blowers
The U.S. government is on a secrecy binge. It overclassifies more information than ever. And we learn, again and again, that our government regularly classifies things not because they need to be secret, but because their release would be embarrassing.

Knowing how the government spies on us is important. Not only because so much of it is illegal -- or, to be as charitable as possible, based on novel interpretations of the law -- but because we have a right to know. Democracy requires an informed citizenry in order to function properly, and transparency and accountability are essential parts of that. That means knowing what our government is doing to us, in our name. That means knowing that the government is operating within the constraints of the law. Otherwise, we're living in a police state.

We need whistle-blowers.
And mostly, with some reservations, I would agree with that, too. And I would want to consider Arnold Kling's Comments on NSA Snooping
5. The issue is an uncomfortable one for libertarians, because I think that most people believe that the government is snooping in their interest. The majority may even be right about that. I myself have less of a problem with the snooping per se than with the secrecy of the programs. In my view, it is the secrecy, along with an absence of strong institutional checks, that is bound to lead to abuse. ...

Maybe the key point is (5). Government officials will argue that what they do must remain secret. They cherish secrecy. They claim that it is for our own good that we do not know what they do. I would say that such claims are often made and rarely true.
How shall we decide between the views? Well, I would look at the data, including the video 'Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?'. That looks like a very straightforward question, with verbal emphasis on "any" and "at all"...and the question received a straightforward answer from Director of National Intelligence James Clapper. At the moment, my understanding (reinforced by Clapper's later statement on what he really meant) is that the answer was a simple and straightforward lie. The White House said White House: Clapper was ‘straight and direct’ in testimony on NSA:
As a senior member of the Senate Intelligence Committee, Wyden had been briefed on the NSA programs, but publicly led Clapper in a line of questioning that would either require him to disavow knowledge of the program, or to answer truthfully, breaking the law by revealing classified information....

“So that he would be prepared to answer, I sent the question to Director Clapper’s office a day in advance. After the hearing was over my staff and I gave his office a chance to amend his answer”...

Clapper sought to clarify his remarks on Monday, telling MSNBC’s Andrea Mitchell that he meant to convey that the NSA doesn’t “voyeuristically pore through U.S. citizens' emails.”
If that's what he meant to convey by that answer to that question, well...ummmm....no. Sorry, I do not believe him at all.

Like Kling, I'm worried about secrecy rather than about intrusiveness: about "the secrecy, along with an absence of strong institutional checks, that is bound to lead to abuse." If I were inventing a government for the information age, it would have much more liberty and less privacy than our current government...and there would be no long-term privacy for anything done with public funds, because most public employees are almost certainly Good People, but we know that Good People sometimes do Bad Things to The Enemy, which all too often and dishearteningly turns out simply to mean the Other. And of course there are always the two-legged cockroaches, but I believe them to be a smaller problem. Conservatives and Progressives and Libertarians are all too trusting...they trust different people for different things, but in fact nobody is fully trustworthy, even if almost everybody is moderately trustworthy. The Constitution is all about not trusting anybody completely. (Today I'm more of a Classical Liberal. It's Tuesday.)

What I see, in the context of Clapper's straightforward response to a straightforward question (from a member of the intelligence committee which has the responsibility of oversight), is a breakdown of the Constitutional relationship between the executive and legislative branches. Perhaps that breakdown indicates a basic inadequacy of the Constitution. Perhaps it indicates Executive overreach. Perhaps it merely indicates a personal felony on Clapper's part, or even on Wyden's (if he had prior knowledge of the system, presumably that knowledge was under some non-disclosure oath). In any case, the breakdown looks to me like it's real, and if Snowden actually cares about the Constitution, then he should have done exactly what he did even if he personally approved of the NSA data collection...process is more fundamental than outcomes.


Or then again, maybe not.


Updates:
(1) I should also note, I suppose, that I still consider terrorism to be an over-hyped nuisance in the short run but an existential threat in the long run. That naturally changes my perception of the conversation we ought to be having.
(2) My remark about two-legged cockroaches was probably unclear; I think any time you create a "public" position and say "we'll fill it with someone trustworthy and trust that person without watching what they do," whether priest or politician or policeman or prosecutor, you are creating an incentive for would-be abusers to seek that position; you are building a feeding station for cockroaches. John 3:20, and that's all I have to say about it because I actually think people who think they're doing Good are far more dangerous.
(3) We live in Ham Sandwich Nation, a world of so many laws that everybody is violating something (and ignorance of which law is no excuse), where a prosecutor (with personal legal immunity) can prosecute almost anybody for something -- and I expect our data collection to expand from just-for-foreign-terrorism to just-for-terrorism to anything-involving-children to... Well, that's pretty much anything. So it's the prosecutor's personal choice: figure out who is Bad, and then look up what they're violating.


But then again, maybe it just magically won't happen.

Update 2: A blunter view of the effects of Ham Sandwich Nation, at interfluidity » Tradeoffs:
The stupidest framing of the controversy over ubiquitous surveillance is that it reflects a trade-off between “security” and “privacy”. We are putting in jeopardy values much, much more important than “privacy”.

The value we are trading away, under the surveillance programs as presently constituted, is quality of governance. This is not a debate about privacy. It is a debate about corruption.
Update 3: Arnold Kling notes No One is Innocent:
I broke the law yesterday and again today and I will probably break the law tomorrow. Don’t mistake me, I have done nothing wrong. I don’t even know what laws I have broken. Nevertheless, I am reasonably confident that I have broken some laws, rules, or regulations recently because it’s hard for anyone to live today without breaking the law. Doubt me? Have you ever thrown out some junk mail that came to your house but was addressed to someone else? That’s a violation of federal law punishable by up to 5 years in prison.


Harvey Silverglate argues that a typical American commits three felonies a day. I think that number is too high but it is easy to violate the law without intent or knowledge.
Of course that means that we go back from government-of-laws to government-of-men, specifically whoever influences a prosecutor's discretion.
I don't think we can fix this by avoiding surveillance, but maybe as a short-run palliative we should hinder surveillance somewhat. In the long run we have to fix the law.


Or then again, maybe not.


Wednesday, November 07, 2012

The Median Hyper-Partisan


So, we still have a Republican House (a bit more so), a Democratic Senate (a bit more so), and Obama as President (a bit more so?). Things are as they have been, except that the fiscal cliff (more detail here) is closer and both sides' tendency to refuse to negotiate has been reinforced. (update: by this I meant simply that each can say "I won the election; my voters want me to go on with what I was doing.")

I'm wondering about the incentives that have created this situation, and there's an interesting theory expressed at Barack Obama's re-election: A country divided | The Economist. Basically, the columnist argues that two forces are at work.


1. The Median Voter Theorem -- if parties A and B want to catch the median voters, they should move towards the center. The incentives are strong, and that should bring the parties together -- and in real policy terms, it does: "Realistic arguments over policy take place on relatively narrow terrain: they are arguments over a top marginal tax rate of 35% or 39.6%, over a health-insurance system with guaranteed coverage for pre-existing conditions but with or without a mandate, and so forth." Actual radical solutions are simply not part of the discussion, even if academically preferred (e.g., forget the income tax altogether, it's a bad idea: tax consumption instead.)

The Republicans and Democrats are, in practical policy terms, much much closer to each other than either would ever consider being to someone like me. They've come together towards the median voter. Yes, but we also see

2. Media promotion of exciting stories. "...both mass-media analysts and private social-media contributors are rewarded for sharply divisive characterisations." I would generalize this: the effective politician is an entertainer, and he and his team (or she and hers) are also rewarded for generating exciting stories. The most basic story to be told is about Good v. Evil, and even while you're adjusting policies to capture the median voter, you want to be generating stories about Our Friends and Our Enemies; these work just as well on high IQs as on low. The divisions here have something to do with policy, but not a great deal... I recently saw a YouTube video of someone going around asking Obama supporters for their comments on "Romney" policies such as the drone strikes, and naturally getting "That's EEEVIL" as the usual response -- but these were actually Obama policies. Interestingly, some of the respondents said they'd have to rethink their Obama support -- but I predict it won't make much difference. And I'm sure it would work just as well in reverse, on Romney supporters.
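The logic behind the first force can be sketched numerically. This is a minimal illustration only, not anything from the Economist piece: the voter distribution, the rival positions, and the `vote_share` helper are all my own assumptions. The point is just that with single-peaked preferences on one left-right dimension, a candidate standing at the median voter's position wins any two-way contest, so vote-maximizing parties are pulled toward the center.

```python
import random
import statistics

def vote_share(a, b, voters):
    """Fraction of voters who choose candidate position a over b.
    Each voter picks the closer candidate; exact ties split evenly."""
    wins = sum(1 for v in voters if abs(v - a) < abs(v - b))
    ties = sum(1 for v in voters if abs(v - a) == abs(v - b))
    return (wins + ties / 2) / len(voters)

random.seed(0)
# 1001 voters with ideal points on a one-dimensional left-right axis.
voters = [random.gauss(0.5, 0.2) for _ in range(1001)]
m = statistics.median(voters)

# A candidate at the median defeats a rival at any other position...
for rival in (0.1, 0.3, 0.7, 0.9):
    assert vote_share(m, rival, voters) > 0.5

# ...so both parties converge: in practical policy terms, each ends up
# close to the median voter, as described above.
```

The same logic is why the realistic arguments stay on narrow terrain (35% vs. 39.6%): any party that wandered far from the median would simply lose the pairwise contest.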


Of course this means that my own obviously sensible policies have no chance of being enacted. What worries me more than that, though, is that I think the emotional manipulation by both sets of manipulators is increasingly successful. I see intelligent good people on both sides who do not want to know why intelligent good people would be on the other side. That's scary.


As the Economist says,
...Over the next four years, legislative battles are going to continue to be savage and hard-fought. Neither conservatives nor liberals are going to change their minds en masse about fundamental issues of political philosophy. The top priority is for Americans to figure out a way to keep these divisions from dividing the country into two hostile armed camps that are incapable of talking to each other.

Or then again, maybe not.
