Ignorance of Blisks
(On the Requirement of Manual Production for Aerospace Engineering Tools)

In June, I spent some hours with my rocket-scientist little brother, and we talked about my own near-Singularitarianism. I may well have over-emphasized my liking for the idea of 3D printing, which I found esthetically appealing in the late 80s--still do, and now I do expect it to become important. Then in July I found in the mail a clipping from "Aviation Week and Space Technology" about the difficulties of manufacturing the machines that manufacture "blisks". There was a note attached; the note said that
> this article reminds me of why I'm skeptical of "printable machines" in the next 10-20 years.

Umm... well, okay, Roger. I'm not even sure I disagree with you, in the sense that I'm not sure we'll see printed blisk-making machines within 20 years. (We might, or we might instead see "printed" blisks, or neither, and Hofstadter's Law applies to the production of this note.) But when I'm not sure, I get even more verbose than when I am sure, and that's what I propose to do now -- and in any case, I am pretty sure that the article is wrong, whether or not your conclusion from it is correct. I was totally ignorant of blisks. My conclusions, introductions, and everything in between are therefore based on bliskful ignorance. Read at your own risk.
Oh, yeah? The magazine sets up the problem in paragraph 5: blisk-making needs

> a machine tool whose cutting head can follow the digitally encoded design to within about 12 microns.

Okay, 12 microns is pretty small in today's world of practical machinery. However, I think the crucial statement in your magazine is in paragraph 6: this tool "must be built by hand" because, he claims,
> No machine can make another as precise as itself.

If the author (Bradley Perrett) says that's true of the relevant machines at this moment, I have no reason to doubt him. If he says it as a statement of principle, or even as something that will reliably hold for any particular machines over the next 10-20 years (your chosen timeframe), I don't believe it at all. In general, I would say that we have machines of high precision because they have been built, with much difficulty, by machines of lesser precision -- under direction which has been human but which could be robotic. As a special case, I believe that it's perfectly possible for a machine to be built which builds a copy of itself, to exactly the same tolerances. Indeed, my sense of where the future is going, as well as my sense of the past few billion years of evolution, is based on that belief. So...
Replication and Precision: In principle, some (especially analog) methods of replication will always diminish precision (which can be recovered in some cases by tactics such as grinding and polishing), whereas other (digital, including DNA) methods may or may not diminish precision (which can be recovered in some cases by error-correcting codes, or simply by selection, as long as we can generate enough variants that some of them are okay and the rest can be rejected). In essence, I think Perrett is conflating the two halves of the standard-meter problem:
- If you define the meter as a metal stick and try to make analog copies (even with a digitally-controlled tool), then you will lose some precision with each generation of copies. This has been going on for a while; a friend who's an archaeologist at Colgate (Rebecca Ammerman) has written about ancient terracotta goddess statues made from molds made from statues made from molds... (and about tracking the sequence via imperfections and size change). I believe that's what Perrett is getting at with his "No machine can..." claim.
- If, however, you define the meter as a multiple of some reproducible wavelength -- say a carbon-dioxide laser's 9.6 microns, which I pick because it's close to the stated 12 microns, or a nitrogen TEA laser's 337.1 nm -- then the problem of measurement-reproducibility becomes solvable... not at all easy, but solvable.
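The contrast between those two bullets can be put as a toy model (entirely mine; the one-micron-per-copy noise level is an arbitrary assumption, chosen only to make the point visible):

```python
import random

random.seed(0)  # deterministic toy run

def analog_copy(length_m, generations, noise_m=1e-6):
    """Each analog copy adds ~1 micron of random error (assumed noise level)."""
    x = length_m
    for _ in range(generations):
        x += random.gauss(0, noise_m)
    return x

def digital_copy(count, generations):
    """A digital spec -- 'this many wavelengths' -- copies without degradation."""
    return count  # an integer survives any number of generations

meter_as_stick = analog_copy(1.0, generations=10)
meter_as_count = digital_copy(round(1.0 / 9.6e-6), generations=10)
print(f"analog stick after 10 generations: off by "
      f"{abs(meter_as_stick - 1.0) * 1e6:.2f} microns")
print(f"digital count after 10 generations: {meter_as_count} wavelengths, unchanged")
```

The analog drift compounds; the digital count (plus a way of realizing it, which is where the hard work lives) does not.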
> ...actual laboratory realisations of the metre are still delineated by counting the required number of wavelengths of light along the distance.

And as you probably know, commercial laser interferometry can provide pretty fair accuracy in a fairly large space:
> With regard to the lineal control of the laser interferometer, the resolution is 0.16 microns; the repeatability is 1 micron + 1 micron/meter; and the accuracy for radial distance is 10 microns + 0.8 microns/meter.

I don't claim that this magically makes it easy to make blisks or blisk-makers, but the point of interferometry is that you're using precision that comes from the light-waves themselves -- and the effective precision can and does improve without any magical manual touch. I think of the improvements as a Moore's Law instance, but it may be that Moore's Law thinking is irrelevant here; after all, in 1991 we already had, in principle, a laser interferometric system such that
> This method achieved (i) sub-nanometer resolution (0.6 nm/LSB), (ii) high stability (2.5 nm/day), (iii) high linearity (less than 1 LSB), and (iv) high following speed (more than 1000 mm/s).

Well, maybe. I expect that some future CNC systems, including but not limited to 3D printers, will use a well-defined 3D grid which will, if that's desirable, be based on counting off wavelengths one way or another. And later, maybe we'll be counting off specific (crystallized?) molecules. And later, maybe we'll be counting off spaces in a graphene grid, six carbon atoms around each hexagonal tile. And then maybe we'll have to stop, but 12 microns will not seem small. Twelve microns will seem, will be, huge.
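To put numbers on the quoted specs (my arithmetic, not anyone's published claim; the two-meter working distance is an assumption for illustration):

```python
import math

# From the quoted commercial-interferometer spec:
# radial accuracy = 10 microns + 0.8 microns per meter of travel.
def radial_accuracy_m(distance_m):
    return 10e-6 + 0.8e-6 * distance_m

d = 2.0  # assumed working distance, meters
acc = radial_accuracy_m(d)
print(f"accuracy at {d} m: {acc * 1e6:.1f} microns")  # 11.6 -- inside the 12-micron tolerance

# And the gap between the article's 12 microns and the 1991 system's 0.6 nm:
ratio = 12e-6 / 0.6e-9
print(f"12 microns = {ratio:,.0f} x the 1991 resolution")      # 20,000x
print(f"about {math.log2(ratio):.1f} doublings of precision")  # ~14.3
```

In other words, even a 1990s laboratory system already resolved distances four-plus orders of magnitude finer than the blisk tolerance; the hard part is the machinery around the measurement, not the measurement.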
And so? Somehow, Roger, I doubt that you'll consider that to be an adequate response. Maybe it isn't, but anyway you won't think so. Hmm...
Outline: I want to bloviate on our respective professional-geek biases (why you won't take this seriously), and then on my most fundamental caveats (why you shouldn't take this seriously), and then on what I am "predicting", to the extent that I'm predicting anything, and why. You can then decide whether you actually disagree, and whether any comment you might make could possibly lighten my hopeless ignorance.
Biases: In general, we judge and misjudge by supposing that the future will look like X, X being something we think we've learned from in the past, yes? Think back long before our respective PhuDs to the time of the moon landing, with us both at Colegio Nueva Granada -- high school for me, first grade (second?) for you. (Okay, maybe you don't remember it that clearly, but a high-school classmate named Al Borrero got in touch recently -- I guess we'd both gotten in touch with "legendary guitarist" and anthro prof Hector Qirko, who'd just been interviewed by Instapundit -- and Al remembers you and Magi as "very very bright." You were there, you were aware.) At that time, Moore's Law was four years old: two doublings since its proposal. It has gone on pretty well since, with some wobbles as to what exactly was doubling. If your tools had improved the way mine have, I suppose we'd all be commuting to the moon, and we certainly wouldn't have to wait ten years to see whether your itty-bitty rockets work when they get to Pluto. Your tools have not improved that much; in some respects they haven't improved fundamentally at all. So it's natural for my model of improvement to be more dramatic than yours.
My well-grounded lack of confidence: That doesn't mean that Moore's Law is (or is not) now an appropriate model for your subject or mine. Back when I ended my PhuD work in 1980, I remember believing predictions that we were nearing physical limits that would stop it cold. After that, we would depend on massive parallelism (remember the Connection Machine?), and so I was one of the throng doing proofs about parallel programs, functional programming, algebras of parallel-reducible expressions, and automatic parallel scheduling of the evaluation of recursively-defined arrays... but Moore's Law kept on going; sequential machines were good enough to support the "desktop revolution" (say 1975 to 1990, though it continues) and then the "connectivity revolution" (say 1990 to 2005, though it continues). Anyway, Moore's Law kept on going: I wuz wrong. I still expect the original circuit-size Moore's Law (and the directly-associated variations, e.g. disk-drive capacity per square inch) to hit a limit, I still expect growth of effective computer capacity to continue for a while via massive parallelism, and I still even expect functional languages (Erlang?) to play a larger role, but I don't have much of a track record for predictions in any direction.
My basic "prediction": What I think I'm starting to see now, something I haven't seen before, is a robotics revolution to follow the connectivity revolution of the WWW and cell phones: a Moore's Law-style progression in robotics, 3D printing, and associated (mostly CNC) technologies. Back in the 80s, I thought 3D printers might get real someday; now I mostly think that today's toys of that general category will be twice as good in a couple of years, and will go on doubling their overall goodness (which I have no intention of trying to define) at roughly Moore's Law rates for the next decade or three.
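For scale, here is the arithmetic behind "a decade or three" of doubling (a sketch; the two-year doubling period is my assumption, in line with the classic Moore's Law pace):

```python
DOUBLING_PERIOD_YEARS = 2  # assumed: the classic Moore's Law pace

def improvement_factor(years, doubling_period=DOUBLING_PERIOD_YEARS):
    """Overall 'goodness' multiplier after the given number of years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    print(f"{years} years -> {improvement_factor(years):,.0f}x")
# 10 years -> 32x, 20 years -> 1,024x, 30 years -> 32,768x
```

Whether "goodness" is precision, speed, or cost, three decades at that pace turns a toy into something unrecognizable, which is the whole bet.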
Am I sure? Nah. What I see now might not continue. On the other hand, we might see growth (or size/cost shrinkage) a whole lot faster than Moore's Law, because lots of technologies for working at very small scales have already been developed.
Why am I talking this way? Basically, I look at reports of progress -- of robotics in medical, military, pure-research, and just-for-fun applications; of 3D printers and "fab labs", mostly in the last two categories -- and at the progress of productivity in random places. A robotics summary last year claimed that
> Prices of industrial robots, expressed in constant 1990 US dollars, have fallen from an index 100 to 54 in the period 1990-2005, without taking into account that robots installed in 2005 had a much higher performance than those installed in 1990. When taking into account quality changes, it was estimated that the index would have fallen to 22. In the same period (1990-2005), the index of labour compensation in the American business sector increased from 100 to 179. This implies that the relative prices of robots fell from 100 in 1990 to 23 in 2005 without quality adjustment, and to 10 when taking into account quality improvements in robots.

I find that plausible; I expect it to continue; and if it does continue, I expect to see the production and manipulation of objects revolutionized in much the same way that we've seen the production and manipulation of data revolutionized. I don't see why blisks should be an exception.
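Those quoted index numbers translate into annual rates roughly as follows (my arithmetic, not the report's):

```python
def annual_rate(start_index, end_index, years):
    """Compound annual growth rate implied by two index values."""
    return (end_index / start_index) ** (1 / years) - 1

YEARS = 15  # 1990-2005, per the quote
print(f"robot prices (unadjusted):       {annual_rate(100, 54, YEARS):+.1%}/yr")   # about -4%/yr
print(f"robot prices (quality-adjusted): {annual_rate(100, 22, YEARS):+.1%}/yr")   # about -10%/yr
print(f"labour compensation:             {annual_rate(100, 179, YEARS):+.1%}/yr")  # about +4%/yr
```

A steady ~10% annual quality-adjusted price decline is slower than classic Moore's Law, but it compounds the same way, and it is pointed in the same direction.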
Of course, if robotic production of blisks (and of other things) does well enough, there will be correspondingly less incentive to speed the improvement of 3D printing. I expect 3D printing to improve at roughly Moore's Law rates also, and I expect it to dominate robotic production for a large class of consumer goods within your chosen time-frame, but high-precision stuff that's tough enough for aerospace will be hard. It's possible that it won't work well enough until we can do atomic-level assembly (not just putting an atom in the right place, but putting it in the right place with the right bonds); that's hard. No, not impossible. Just hard -- much harder than the industrial-robot approach for almost any individual problem (e.g., blisks). Consider Robot Sales Up 33% in North America in First Nine Months of 2007 - Robotics Online:
> Among the best performing non-automotive markets this year are life sciences/pharmaceutical/biomedical/medical devices (up 20%), food & consumer goods (up 15%) and plastics and rubber (up eight percent).

Of course those are sales, which as they admit are driven by cycles as well as by fundamental change; I expect many of those numbers to drop over the coming year, even if the prediction-market price on an actual recession is still under 50%. In the longer run, though, increasing required precision will force less manual processing, not more: Perrett is quite fundamentally wrong. I think. Your blisks will be produced automatically -- probably within your rocketry career. (You don't personally get involved with blisks, do you?) Never mind; the point is increasingly automatic production, especially of the means of (increasingly automatic) production. Even though I did give my youngest a soldering kit for Christmas, and I'm hoping to get her to use it.
On the other hand, I think you're surer of what you're sure of than I am sure of what I'm sure of, and you may be right.
Or then again (and again and again), maybe not. Happy New Year!
Labels: 3Dprinting, aerospace, robotics, singularity