by David Severa
Something may be desirable without being probable, and pursuing the improbable does not necessarily come at the expense of crafting second-best fallback solutions. I’m not under any illusions about the difficulties involved, but I still think that the fight is worth it. And while I’ll push for what I believe in, I’m certainly glad that there are others out there thinking about what to do if we fail.
Some arms control agreements work, and even if they’re imperfect they can still restrain behavior and prevent arms races. For instance, most but not all nations have signed the Chemical Weapons Convention. While nations like North Korea and Egypt haven’t joined, and nations like Syria have been found in flagrant violation and only weakly taken to task, the Convention hasn’t collapsed. In fact, “65,720 metric tonnes, or 90%, of the world’s declared stockpile of 72,525 metric tonnes of chemical agent have been verifiably destroyed.” In other words, countries remain compliant even knowing that others sometimes aren’t. Similarly, agreements like the Nuclear Non-Proliferation Treaty haven’t stopped proliferation wholly, but the spread has been slower than might be expected, and the international community has been able to successfully (hopefully) pressure Iran on its nuclear program. As to eugenics, I see no reason why the occasional violation (likely to be small-scale for the most part) should trigger the collapse of the whole edifice.
Now, those agreements were signed because states saw them as being in their interests, or at least not especially opposed to their interests. Chemical weapons aren’t actually particularly effective. Nations protected by America’s nuclear umbrella have good reasons to support non-proliferation that might not hold in the future. And so on. Will countries see it in their interests to ban eugenics? Much harder to say. Given Europe’s abhorrence of even genetically modified food, European nations seem likeliest to support a ban. Of the large nations, China seems perhaps the least likely. We can at least guess at how social forces will typically be arrayed. In a given nation the business community and the military, both seeking efficiency and relative advantages over competitors, seem naturally friendly to eugenics. Many religions seem naturally opposed. NGOs and the like would perhaps be split, with public opinion determining which side is stronger. And how will public opinion fall? Who knows, but a concerted effort to inculcate moral revulsion could have an effect. Without public opinion on our side the fight would be doomed, given that elites will likely be favorably disposed toward eugenics. More than anything else, that’s what success will come down to.
But will public opinion have much sway in authoritarian countries like China? No ruler can ignore popular opinion altogether, but some governments have a freer hand to do what they want and to control the media so as to control the public. A nation like China would need either to shed its authoritarianism or to have its leaders become convinced that a eugenics program was not in their interests, for instance because it would destabilize their rule. I’m not sure which possibility is less likely, and this is the biggest sticking point for any worldwide ban on genetic engineering. In most nations, at more or less the same time, there will need to be a confluence of support, and that relies on factors outside the control of activists. That doesn’t mean we shouldn’t be ready to pounce if the moment is right, and it doesn’t say anything about the desirability of a ban, which is the question under discussion. As I said, a ban would be best, but other paths should be considered simultaneously.
The question of how long we can expect such an agreement to last, or how long it should last, is an interesting one. If no actions are ever taken to edit the human genome, then over tens or hundreds of thousands of years natural selection will change us itself. (Though maybe humans will prove as durable a form as sharks.) But that’s well beyond the scope of our discussion. I won’t say forever, because I don’t know what social technologies the future will develop. Maybe the anarchists will work something out and we’ll live in an individual-choice utopia where coercion is but a distant dream. I doubt it, but who knows? The ban should be indefinite, at least.
Libertes raises an objection to this, which I’ll take the liberty of rephrasing: We should consider starting eugenics programs now, while there are people raising ethical concerns who will spur us to act more responsibly. If, however, we create a strong norm against eugenics as being beyond the pale, then those in the future most likely to violate such a taboo are more likely to be evil in other ways. By passing up the chance to take control now, we leave a void which will not be filled by those we would choose. It’s a clever argument, but I wouldn’t put too much weight on it. In those more utopian futures people may also choose to end limits on genetic engineering, so you have to decide which sort of future seems more plausible. It’s not one-sided. I lean toward the dystopian as well, but not with enough certainty to let that change my actions in the present.
My real objection is that we are the ones living in that dystopian future where norms against eugenics are breaking down in a way that the unscrupulous and evil will take advantage of. I’ve already made my case to that effect.
And my case still stands. If a ban were truly impossible, then there would be no case for pursuing one. But it’s just very difficult, and pushing for a ban doesn’t rule out second-best regulations. In fact, by moving the Overton window, we may at least be strengthening whatever regulations are eventually adopted.
One final point. At the very beginning of our discussion Libertes states that “My starting assumption is that activities shouldn’t be banned without clear and pressing reasons for doing so.” Now that I’ve sketched out a variety of possible irreversible harms that could result from eugenics, I’ll argue that this assumption is wrong. In many domains it might be the right attitude, but not here.
I haven’t yet elaborated on the ultimate damage that may be done. My biggest worry is that, in time, some will to strive may be lost. That is, many of the problems I’ve mentioned could conceivably be reversed. But what if the will to undo mistakes is lost, or the ability to recognize the mistakes we’ve made? What if we make a reduced humanity that can’t even understand its own condition and thus never seeks to restore what’s been lost? Excessive conformity, excessive obedience, excessive passivity: I don’t know what the cause might be, but that possibility is why I worry so much. It’s not just about harms, but irreversible harms. Here’s my own version of the precautionary principle:
Changes to the current order that may pose existential threats to human survival or flourishing must be rigorously proven to be safe before being permitted.
This seems quite strong and reasonable. Of course “may” is a slippery term, but without getting into the exact boundaries, surely wide-scale eugenics fits as well as anything else in history.
Moderator’s comments for Round 3
- Can parents be trusted to make the right choices for their children?
  - Round 1: What further evidence might be brought to bear? For instance, do parents accurately guess how happy the life of someone with a given disability is likely to be?
- Is there a right to an unaltered genome?
  - Round 2: Both sides agree that framing this discussion in terms of rights is unhelpful.
- Will positive or negative externalities predominate?
  - Round 1: Libertes has argued that there are positive externalities to genetic engineering, while Precautiones has pointed to negative ramifications such as loss of human diversity, permanent inequality, and de facto government control of reproduction. Are negative or positive trends likely to predominate?
  - Round 2: Both of you have identified possible positive and negative externalities, but is it even possible to weigh these against each other with so little evidence right now?
- How much freedom from government intervention is possible or desirable?
  - Round 1: Are the practical limits to the libertarian approach listed by Precautiones – state takeover, the poor lacking access in practice – real? If so, are they avoidable?
  - Round 2: Both of you assume that government regulation of genetic engineering would necessarily be harmful, but perhaps you could flesh out what those harms would be. Is democracy really so inadequate a check as you both seem to think?
- Is a total ban on even mere research possible?
  - Round 2: Does this include research on genetic diseases?
  - Round 3: Despite different approaches, both sides seem to agree that the odds of a total ban are small. The question thus becomes whether Precautiones is right that a ban is nevertheless worth pursuing, and that depends firstly on whether genetic engineering will be good or bad (as we’ve already discussed) and secondly on whether the opportunity cost of pushing for a ban is too high.
- Trade-offs
  - Round 2: Are the trade-offs between traits that Precautiones discussed likely to be real? Is the vicious cycle of competition? Can we know that right now?
  - Round 3: If intelligence is mostly correlated with other, positive traits, that’s at least circumstantial evidence that such trade-offs won’t be too significant. Is there a rejoinder to this?
- No time like the present?
  - Round 3: Are there reasons to think that the present is a better or worse time to begin a genetic engineering program than the future? Surely this is simply unknowable to any useful degree?
- Where does the burden of proof lie?
  - Round 3: Should genetic engineering be legal or banned by default? And how much evidence is needed to allow/disallow it?