I said in Lancet Redux that I think that if Bush's "Big Bang" fails, "we will not be thinking of 655,000 as a large number of dead."
As TPM Barnett recently noted in the Knoxville Gazette, we are roughly ten years from genetically engineered threats from non-governmental organizations -- and roughly ten years from being able to detect and maybe deal with them. I want to be a bit more concrete here, adding to my comments on Barnett's Thousand Flowers post.
In general, I believe that Barnett is right to say that technology favors the good guys, but I am not confident that it has to work that way in any given case. (I've been describing myself as a "moderate techno-optimist" for quite a few years now.) Suppose that the technology to create deadly viruses and the technology to counter them are developed more or less in tandem. Suppose that ten years from now, the bad guy can start by crossing a lot of viruses' RNA and randomly mutating segments, producing a million or so candidates; these are then tested on cultured human cells to select a few thousand strains that look good (i.e., bad), in the expectation that a few hundred of these will actually be deadly.
(Or maybe all the design can be done by simulations, and then he prints out RNA sequences which will in fact be deadly.)
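The screening funnel described above can be sketched numerically. The pass rates below are pure assumptions, chosen only to reproduce the post's rough proportions (a million candidates, a few thousand surviving the cell-culture screen, a few hundred actually deadly):

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

# Assumed funnel parameters -- illustrative only, not real biology.
N_CANDIDATES = 1_000_000
P_SCREEN = 0.003   # assumption: ~0.3% look promising in cultured cells
P_DEADLY = 0.1     # assumption: ~10% of those are actually deadly

# Stage 1: cell-culture screen thins a million mutants to a few thousand.
screened = sum(1 for _ in range(N_CANDIDATES) if random.random() < P_SCREEN)
# Stage 2: of the promising strains, only a fraction turn out deadly.
deadly = sum(1 for _ in range(screened) if random.random() < P_DEADLY)

print(f"candidates: {N_CANDIDATES:,}")
print(f"passed cell-culture screen: ~{screened:,}")
print(f"actually deadly: ~{deadly:,}")
```

The point of the arithmetic is just that brute-force mutation plus cheap screening gets you from a million random tries to hundreds of weapons without any deep design skill.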
I am not supposing that the good guys have been idle; indeed, I'm supposing that their progress has been greater than the baddies'. At the time in question, the good guys have developed fabulous technology, technology much more advanced than the bad guy's, technology by which any particular strain (HIV, a specific influenza, smallpox) can be halted in a week or three. Given blood/tissue samples from an infected human, they can quickly isolate any active viruses, they can solve the genetic codes thereof and print out vaccines.
At the moment, I think that's an extremely optimistic view of a stage of technology that we will reach, maybe in ten years or maybe in twenty. Who has the advantage at that stage? Well, my feeling now is that the bad guy is way ahead, able to kill very large numbers -- unless you can bring in other technology to stop the bad guy ahead of time. Surveillance technology. This is technically feasible (although legally problematic for some countries) in the Core, but it just doesn't help that much when talking about North Korea or Syria. Later stages, in which nano or robotic technology adds to the genetically engineered virus threat (genetically engineered fleas as a delivery mechanism, and so on), make the situation even worse, if the bad guy has a protected zone in which to develop stuff. The Barnett prescription solves my problem: we have to convert Gap to Core. And then we have to change the US Constitution to distinguish between liberty and privacy, so that we can improve security by reducing some kinds of privacy but not liberty.
How? One possibility: an agency which knows all about you but can't normally do anything about it, can't even share data with other agencies -- but all of its data about you which is more than a year (2 years?) old is accessible to you. The only crime it can report is (conspiracy to commit) mass murder. Well, something like that. And there is a special court in which you can protest "But I'm just an ordinary thief/drug lord/murderer" and which can send an agent of this agency to jail for abusing his surveillance technology without ever exposing you to conventional courts. Accountability über alles.
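The core of the proposed disclosure rule -- the subject can read any record about them older than the holding period, while fresher records stay sealed -- is simple enough to sketch as a toy model. The field names and the one-year holding period here are illustrative assumptions, not a real system:

```python
from datetime import datetime, timedelta

# Assumed holding period: records older than this are opened to
# the surveillance subject (the post floats one or two years).
HOLDING_PERIOD = timedelta(days=365)

def accessible_to_subject(records, now):
    """Return the records old enough that the subject may view them."""
    return [r for r in records if now - r["collected"] > HOLDING_PERIOD]

# Hypothetical records about one person.
now = datetime(2007, 1, 1)
records = [
    {"id": 1, "collected": datetime(2005, 6, 1)},   # past the holding period
    {"id": 2, "collected": datetime(2006, 11, 1)},  # still sealed
]
print([r["id"] for r in accessible_to_subject(records, now)])  # [1]
```

The design choice being modeled: the check runs against the collection date alone, so the agency has no discretion over what eventually becomes visible to you.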
And how to convert Gap to Core? As Barnett says,
every time somebody who's thought it through lays out a vision for how we fix the mess we're in, they basically recite the [SysAdmin] concept...

Yup. I've believed that since long before I heard of Barnett.
(but then again, maybe not.)