Social Security and Productivity
I've been looking at social-security discussions, and more generally at old-age-expense discussions, and they seem very strange to me because I don't understand the assumptions made about productivity. The GAO assumption is
Total factor productivity growth
1.4 percent through 2015 (CBO’s August 2005 short-term assumption);
1.4 percent thereafter (long-term average from 1950-2004)
As it says, this is a projection of an existing long-term average; it's saying that if things go on as they've been going, then this is what we get: 1.4% per year, which means a doubling time of about ln(2)/1.4% ≈ 50 years. And if we more than double the old-age-expense per productive worker, then we have an increasing problem. Maybe we run out of money, and the pay-as-you-go Social Security/Medicare framework falls to pieces because there isn't enough coming in to cover what goes out.
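(For the record, here's the back-of-the-envelope arithmetic as a little Python sketch; the 1.4% rate is the assumption quoted above, while the 75-year horizon is just the usual actuarial window, my choice for illustration.)

import math

r = 0.014                        # assumed long-run productivity growth: the 1.4%/yr figure above
horizon = 75                     # years; the usual actuarial window (my assumption, for illustration)

doubling_time = math.log(2) / r  # continuous-compounding doubling time
growth_factor = (1 + r) ** horizon

print(f"doubling time at 1.4%/yr: {doubling_time:.0f} years")           # about 50 years
print(f"growth over {horizon} years at 1.4%/yr: x{growth_factor:.2f}")  # about 2.8x

At that trend, productivity doubles roughly once every fifty years and doesn't quite triple over the full 75-year window.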
However, productivity growth is not simply a 1.4% per year long-term trend with no underlying structure; productivity growth is a process of learning how to produce goods and services with less labor. (Well, isn't it?) Any given year's productivity growth or shrinkage may be dominated by some random or cyclical blip, e.g. the economy does better, demand rises, marginally less-skillful workers are employable, so productivity goes downwards. When we're talking about long-run productivity growth, however, it seems to me that we're mainly talking about the technology of increasingly automatic production, beginning with production of desired goods and services themselves, but ending with what I would describe as
increasingly automatic production of the means of (increasingly automatic) production.

The productivity growth trend here is that of Moore's law and Kurzweil's related "laws": doubling times which have been in the range of one to five years, but which so far apply to only a part of the economy. That gives us a different projection: if things go on as they've been going, then the fastest-growing parts will outgrow the rest, become dominant, and keep right on going.
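(To see how quickly a fast-doubling slice takes over, here's a toy Python projection. The starting share and both growth rates are my own illustrative assumptions, not anyone's forecast: a slice at 5% of output doubling every three years or so, against everything else growing at the 1.4% trend.)

fast, slow = 0.05, 0.95              # assumed initial shares of output
fast_rate, slow_rate = 0.26, 0.014   # ~26%/yr doubles roughly every 3 years; 1.4%/yr is the trend above

for year in range(41):
    if year % 10 == 0:
        share = fast / (fast + slow)
        print(f"year {year:2d}: fast-doubling slice = {share:.0%} of output")
    fast *= 1 + fast_rate
    slow *= 1 + slow_rate

Even starting at a twentieth of the economy, the fast-doubling slice is roughly a third of output after a decade and essentially all of it after three -- at which point the blended growth rate is the fast rate, not 1.4%.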
Kurzweil notes reasons for thinking that at some point, perhaps in the 2030s, we reach a "singularity" in which artificial intelligences momentarily match and then surpass the humans who created them (and may become them). Okay, fine, I've actually believed that since before I was a computer science graduate student back in the late 1970s, maybe since I read Asimov's The Last Question in the early 1960s and soon afterwards played with my first "computer", a Minivac 601. Yes, to me it's fascinating, but that's not what I'm talking about here.
You can reject the whole notion of artificial intelligence if you like, but I do not see how you can reject the notion that we have increasingly automatic production of the means of (increasingly automatic) production, and that the doubling-times of this kind of production have little to do with the doubling-times of traditional productivity growth: they are intrinsically faster. Even if Moore's law bottoms out, we will still end up with programmable factories which generate new programmable factories as well as the desired goods and services (or machines to generate them). They won't be nano-factories, but they will be programmable and the only cost of running them will be the cost of providing the data they need. The radical projection is Kurzweil's, of nano-based intelligence, but I don't see how to avoid at least several doublings in productivity, built into what we already have.
What am I missing here?
I look at that CBO average: 1.4%. That long-term average, however, is not exactly trend-free: as Brad DeLong puts it (in discussing wages):
First, from 1973-1995 the rate of productivity growth in the American economy was very low--roughly 1% per year... Since 2000 the rate of productivity growth has been 3.5% per year...

Well, that could represent a one-time blip, a cycle, or all kinds of things. Is there a bottom line as regards old-age-expense? I look for discussion: Andrew Samwick says at Vox Baby that
I also believe that the actuaries are projecting too little productivity growth in the long term (which would improve the system's finances), but I don't think we can honestly project that we will grow our way out of this.

He's referring back to a January post in which he worried about the requirements for 75-year balance:
this translates into long-term growth rates of 3.3 percent for productivity and 3.5 percent for real GDP. That productivity growth rate strikes me as too high.

Why? Well, he refers to Robert Gordon's Brookings paper on productivity, Exploding Productivity Growth: Context, Causes, and Implications, which actually settles down quickly to a discussion of the next two decades. It presents
... several reasons to believe that productivity growth over the next two decades will be slower than the mid-2003 estimated trend of 3.05 percent a year. Most important is the role of the 2000–01 profit squeeze in motivating an unprecedented wave of cost cuts and reductions in labor input. ...

Well, sure, that makes sense, we've had a short-term blip. But then the paper goes on to say
many of the most fruitful applications of the marriage of computer hardware, software, and communications technology have already occurred. The size of human fingers and the ability of human eyes to absorb information from tiny screens set limits to miniaturization. It seems quite likely that diminishing returns will set in, at some point over the next two decades, in the fruitful application of the innovation wave of the late 1990s.

Maybe it's just me, but this seems totally off the wall -- and this is as serious as Professor Gordon seems to get about technological growth. It would have sounded a little bit better around 1980, when I first saw a calculator-watch with buttons too small for fingers -- well, it still sounds a bit like the mythical "let's close the patent office, everything that can be invented has been invented". I find it very hard to believe that Professor Gordon is serious.

I'm expecting the innovation wave to keep right on building, because it seems to me to be more and more self-sustaining. Communications go between software components spread across hardware networks, and they decreasingly require human fingers to manage them or human eyes to absorb information from tiny screens. In the end, we are talking about factories which can "print" consumer goods and which can also "print" copies of themselves as sets of components and assemblers. Kurzweil (following Drexler) thinks these will be nano-scale, and he could be right, but even if they are aircraft-carrier size and print out submarines to go do the mining/fishing/farming, they will involve less and less human labor as time goes on -- until we get to a singularity point which may or may not resemble Kurzweil's, but which will bring human labor for duplication down to zero. At that point, only creation costs.
Or then again, maybe not. But why not? I'm just projecting trends, where I think I know something about why those trends should continue for a long ways to come...but as I said when I started this blog, I'm generally wrong even about stuff that I ought to know something about.