Wednesday, February 25, 2009

The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge

I am currently reading the (so far) excellent book Wired for War, in which Vernor Vinge's 1993 essay about technological developments changing the world is cited as a seminal work. I decided to read it, and I can see why it had such a large impact. You can find the (relatively short) article here.
I've decided to cut and paste various parts below with minimal commentary indicated by dashes, but you might as well just read the whole thing.

Vinge's Thesis: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

This can happen in four main ways:
"There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
  1. There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
  2. Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  4. Biological science may provide means to improve natural human intellect."

-I believe that the order of events will be 3, 4, and then 1. I'm not sure about 2, but it seems most likely to occur between 4 and 1. I feel relatively confident because one could argue that 3 and 4 are going to happen soon, if they haven't already.

Vinge quotes Good:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. "

"We will see automation replacing higher and higher level jobs."
- I believe this is a trend that has been happening for quite some time, going back at least to the industrial revolution. Additionally, there is a lot of 'human automation' in China and India doing the lower-level work (i.e., manufacturing and basic office processes), and even there the workers are aided by machines at almost every step.

Vinge wisely acknowledges that "we might never see a Singularity. Instead, in the early '00s we would find our hardware performance curves beginning to level off -- this because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some very powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever "wake up" and there would never be the intellectual runaway which is the essence of the Singularity. It would likely be seen as a golden age ... and it would also be an end of progress."
- I especially liked this part because it makes a falsifiable prediction. It is still the '00s, and our hardware performance has not leveled off; it has continued to increase at the same exponential rate. Further, he observes that if the technology leveled off, it would be the end of (technological) progress. Perhaps that is what our world needs, so we would have a chance to adapt to what we already have, but that is entirely unrealistic.
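-As a rough sanity check on the exponential claim above (my arithmetic, not Vinge's): if hardware capability doubles roughly every two years, the sixteen years between the essay (1993) and this post (2009) compound into roughly a 256x improvement.

```python
# Back-of-the-envelope check on the "no leveling off" observation.
# Assumes a ~2-year doubling period (one common reading of Moore's law);
# the exact period is debatable, but the point is the compounding.

essay_year, post_year = 1993, 2009
doubling_period_years = 2.0

doublings = (post_year - essay_year) / doubling_period_years
print(f"{doublings:.0f} doublings -> roughly {2 ** doublings:.0f}x the 1993 hardware budget")
```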

"Eric Drexler has provided spectacular insights about how far technical improvement may go. He agrees that superhuman intelligences will be available in the near future -- and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good's ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical."

What about programming in rules to constrain the power and freedom of these machines?
"I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (and so human competition would favor the development of the more dangerous models)."
-I tend to agree that at least some people somewhere will try to build superintelligent machines even if there are laws against it. Additionally, the notion of confinement or control is a fascinating one because it seems like it would be difficult, if not impossible, to achieve. If it is impossible, what then?

"The physical extinction of the human race is one possibility."
-It is important that this is acknowledged as a possibility, as one can easily get carried away with utopian dreams.

"(I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)"
- If that rule were followed, wouldn't that be an interesting world! I liked his wording of "the game-theoretic payoff is so hard to articulate."
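-As an attempt to articulate it anyway, here is a toy model (entirely my own construction, not from the essay): a chain of agents of increasing power, where each agent chooses how to treat the agent below it. Following the rule only pays off for you if the agent above you follows it too.

```python
# Toy hierarchy for Good's "Meta-Golden Rule" (my own construction).
# Agent i decides how to treat agent i-1 below it; agent 0 is at the
# bottom. Exploiting your inferior pays off locally, but if the norm of
# kindness holds all the way up, everyone below the top comes out ahead.

def payoffs(policies):
    """policies[i]: how agent i treats agent i-1 ('kind' or 'exploit')."""
    scores = [0] * len(policies)
    for i in range(1, len(policies)):
        if policies[i] == "kind":
            scores[i - 1] += 2   # the inferior benefits
        else:  # "exploit"
            scores[i - 1] -= 2
            scores[i] += 1       # exploitation pays off locally
    return scores

print(payoffs([None, "kind", "kind"]))        # [2, 2, 0]: kindness flows down
print(payoffs([None, "exploit", "exploit"]))  # [-2, -1, 1]: only the top gains
```
Note that for the topmost agent, exploiting strictly dominates in this toy, so self-interest alone never justifies the rule; that seems to be exactly what makes the payoff so hard to articulate.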

"And it's very likely that IA [Intelligence Amplification] is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that."
- This makes obvious sense to me.

"Allow human/computer teams at chess tournaments."
- I think this is a good prescription/recommendation. Interestingly, it already happened over a decade ago (though that was still five years after Vinge's proposal).
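-For the curious, here is a minimal sketch of what a "centaur" (human/computer team) move loop might look like, using the python-chess library with a UCI engine such as Stockfish. The engine path and depth limit are my assumptions, and the "human" half of the team just picks among the engine's top candidates.

```python
# Minimal "centaur chess" sketch: the engine proposes candidate moves,
# the human picks among them. Requires the python-chess package and a
# UCI engine binary (the path below is an assumption -- adjust to taste).
import chess
import chess.engine

ENGINE_PATH = "/usr/local/bin/stockfish"  # assumed install location

def candidate_moves(board, engine, n=3, depth=18):
    """Ask the engine for its top-n candidate moves with evaluations."""
    infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=n)
    return [(info["pv"][0], info["score"].pov(board.turn)) for info in infos]

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    while not board.is_game_over():
        candidates = candidate_moves(board, engine)
        for i, (move, score) in enumerate(candidates):
            print(f"{i}: {board.san(move)} ({score})")
        choice = int(input("pick a candidate: "))  # the human half of the team
        board.push(candidates[choice][0])
finally:
    engine.quit()
print(board.result())
```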

"The power and influence of even the present-day Internet is vastly underestimated."
-One could respond "duh" to this, but I don't think that would be fair. My own recollection is that I thought the Net was going to be big; then it really wasn't, because there was not much on there and little to do. Then, just several years later... well, I'm writing this blog, aren't I? (A blog I started 5 years ago.)

"One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are coopted into the winners' enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare."
-I think he could have spent more time here discussing how we might end up in a world where the elite gain substantially more intelligence and power, so that we might actually have Gods on Earth among the normals, with the ability to dominate and subjugate.

"What happens when pieces of ego can be copied and merged, when the size of a selfawareness can grow or shrink to fit the nature of the problems under consideration?"
-What a self will be is one of the most interesting things to think about within the notion of a singularity.

"From one angle, the vision fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries. From another angle, it's a… worst- case scenario."
-It is always wise to keep in mind that things can go to a variety of extremes (but just as often seem to end up somewhere in the middle).

1 Comment:

Blogger Timestarved said...

An old '50s movie called "Forbidden Planet" came to mind as I read your excellent post. Worth a look, as its story line involves a race that destroys itself at the very moment of its technological pinnacle.

Also, if you have not read it, I recommend Vinge's "Rainbows End".

9:29 AM  
