Tag Archives: computer science

Behind the Science: Infinite Russian cats–part 2a of 3

What does it mean that "my computer can do what your computer can do"?  My computer can play Crysis, and yours probably can’t, but is that really a meaningful distinction?  My computer has a bunch of ports and things on it—USB, HDMI, etc.—and there’s probably a bit of overlap between my computer and yours there.  But my computer could be missing most, if not all, of its ports, and it could definitely still be said to compute.

You might have a Mac and I have a PC, but again, that seems like a superficial difference—while there may be certain programs that run only on PCs and not Macs, it’s not as though a similar program couldn’t be written for a Mac to do the same thing.

We could play this little reductionist game all day—winding down past the processor of the computer, comparing flip-flops and NAND gates, past transistors, down to the electrons themselves—those tireless carriers of the binary alphabet of our information age.
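As a side note (mine, not from the original post): one reason the reductionist game can comfortably stop at NAND gates is that NAND is functionally complete—every other Boolean gate can be built out of it. A minimal, purely illustrative Python sketch:

```python
# Illustrative sketch: building NOT, AND, and OR from NAND alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT x = x NAND x

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # x AND y = NOT(x NAND y)

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # x OR y = (NOT x) NAND (NOT y)

# Quick check over every input combination.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```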

But what’s the difference between electrons carrying out computations versus me carrying out a computation?  After all, given enough time, I (or perhaps, Suzie) could manually reproduce anything either my or your computer is capable of computing.  We’d just have to follow the correct rules (and possess a sufficient amount of paper…and pencil lead).
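To make the “follow the correct rules” idea concrete, here is a minimal sketch of a machine defined entirely by a rule table, in the spirit of a Turing machine. The particular machine below (one that flips every bit on a short tape) is just a made-up example for illustration, not anything from the post—but it is exactly the kind of bookkeeping a patient person with paper and pencil could carry out.

```python
# Rule table: (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # blank cell marks the end: stop
}

def run(tape: str) -> str:
    cells = list(tape) + ["_"]          # the tape, ending in a blank
    head, state = 0, "scan"
    while state != "halt":
        symbol = cells[head]
        new_symbol, move, state = RULES[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells).rstrip("_")

print(run("10110"))   # -> "01001": every bit flipped, one rule at a time
```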

Ethics of the dangerous present and the hypothetical future

I need to reiterate an obvious truism: science peels back the unknown, producing new knowledge and changing our perception of the possible. Every bit of new information is individually inert and blameless, but when humans choose to act on scientific knowledge, fundamental facets of nature are seen in a whole new light. Over the latter half of the 20th century, perhaps no field was more emblematic of the dichotomy between great good and great danger than nuclear physics; the same knowledge that led to abundant nuclear power also led to ruinous nuclear weapons. Though the developments of other fields may not be so dramatic and poetic, the ethics underlying technological advancement are an important issue. I am frustrated that scientific ethics so often abandons the issues resulting from present-day science to dance through the realm of the science fictional.

Recently, I read this article by Huw Price in the New York Times with a mixture of excitement and disappointment. His particular focus is on artificial intelligence research and thus on the possibility of a technological singularity-like event. Sometimes derided as the “Rapture of the Nerds,” the singularity refers to a time when humans first develop an AI smarter than themselves, which will (in theory) exponentially improve itself. This massive intelligence could potentially render humans themselves superfluous—or at least, no longer the dominant will on Earth. In any case, the implications of such an event are, of course, enormous.

Anne Hoey and John Benton discuss artificial intelligence in the military

Over fifty years ago, scientists began to discuss the possibility of designing a pseudo-brain that worked at the capacity of a human brain, and the field of artificial intelligence (AI) was born. Although they grossly underestimated the complexities associated with such a task, they were right about the array of conceivable uses—especially by the United States military. Two scientists who took part in the golden era of AI research in the 1980s, Anne Hoey and John Benton, formerly of the Center for Artificial Intelligence (CAI) in the United States Army Topographic Engineering Center (TEC), have agreed to share top-secret information with the Berkeley Science Review.