Ethics of the dangerous present and the hypothetical future

I need to reiterate an obvious truism: science peels back the unknown, producing new knowledge and changing our perception of the possible. Every bit of new information is individually inert and blameless, but when humans choose to act on scientific knowledge, fundamental facets of nature are seen in a whole new light. Over the latter half of the 20th century, perhaps no field was more emblematic of the dichotomy between great good and great danger than nuclear physics; the same knowledge that led to abundant nuclear power also led to ruinous nuclear weapons. Though the developments of other fields may not be so dramatic and poetic, the ethics underlying technological advancement remain an important issue. I am frustrated that scientific ethics so often abandons the issues raised by present-day science to dance through the realm of the science fictional.

Recently, I read this article by Huw Price in the New York Times with a mixture of excitement and disappointment. His particular focus is artificial intelligence research, and thus the possibility of a technological Singularity. Sometimes derided as the "Rapture of the Nerds," the Singularity refers to the moment when humans first develop an AI smarter than themselves, which will (in theory) exponentially improve itself. Such a massive intelligence could render humans superfluous, or at least no longer the dominant will on Earth. The implications of such an event are, of course, enormous.

The article frustrates me. The ethical implications of advanced technologies that exist right now, from the health effects of carbon nanostructures to whether the contents of your genome are your intellectual property, go underexamined. Meanwhile, the Singularity remains science fiction, yet it dominates our imaginations.

Price notes Bertrand Russell's efforts in the context of nuclear science, but the comparison couldn't be less apt. Constructing a sound ethical argument, rather than panicked hyperbole, depends on understanding the specifics of a technology. I don't resent, or want to suppress, those who contemplate the implications of potential scientific breakthroughs. I do, however, have a problem with those who argue for the dire consequences of the not-yet-possible. It hardly makes sense to institute the Turing Police when there is no universal agreement on whether the sort of Singularity-inducing AI Price fears can even exist. So many issues deserve so much more attention; real science, happening today, needs the skill of philosophers like Huw Price far more than tomorrow's hysteria-inducing "maybes" do. More to the point, historical failures to understand the implications and consequences of then-current technologies (e.g., tetraethyl lead and CFCs) have proven far more harmful than any failure to anticipate the ethical challenges of hypothetical developments.

Image courtesy Decaseconds Photography.
