The Basis Of My Futurism
In a previous essay of mine (The Future Of Intelligence),
I specifically disclaimed any attempt to provide evidence for my
thoughts about the future, and about The Singularity in particular.
This essay attempts to present some of my reasoning and
evidence.
This mini-essay can be considered a somewhat less-well-written
version of Xuenay's "14 objections against AI/Friendly AI/The Singularity answered".
You should probably read that too/instead.
Beliefs
My conclusions are based on the following premises. I can offer
only the barest evidence for most of them. Better still, several of
them are impossible to disprove at this time. Sorry about that.
The silly little tags are for quicker reference in the
conclusions section.
- SRNPossible
- Self-replicating nanotechnological robots are possible. There's
actually reasonably solid evidence for this; Google can help you find
it, but you might want to start with Is Molecular Nanotechnology "Scientific"?.
- SRNGraspable
- Self-replicating nanobots (SRN) are within our grasp, technologically,
in the near term. Thirty years is, I think, a good upper bound. I would
be unsurprised to see them in fifteen.
- SRNExistential
- SRN are an existential risk to the human species. See
Bostrom's Existential Risks Paper,
amongst many others, for discussion on this point.
- SRNWeapons
- SRN-based weapons (or, for that matter, any nanotech weapons,
self-replicating or not) are a fundamentally unstable military
technology, and have massive first-strike advantages, which makes
them massively more dangerous than current WMDs. Ask Google about
{nanotechnology "first strike"}; this has been written about
extensively. See in particular
Nanotechnology and International Security.
- StupidHumans
- Humans are too stupid to be trusted with a military technology
that is an existential risk and has a first-strike
advantage. I argue that this is true prima facie, and that no sane,
intelligent human would argue against it. I'm sure that once this
is posted to the 'net, someone will try to prove me wrong.
- AGIPossible
- It is possible to have a computer-based general intelligence that is at
least as general, and at least as capable, as the average human. At this
time, there is no direct evidence that this is possible. The only argument
for it is anti-anthropocentrism, i.e. that it is arrogant to believe
otherwise because humans can't be that special. On the
other hand, there's no evidence to the contrary, either. AGI stands
for Artificial General Intelligence, by the way, and is used to
distinguish an AI that can have a conversation with you from "AI" in
the sense of "pathfinding algorithms in video games" and such.
- IntIndependent
- The stronger version of AGIPossible: there is no aspect of intelligence, in
any form, that is dependent on its physical substrate. Any form of
intelligence that a biological being can have, a silicon being can also
have (even if that means emulating hormones or whatever), and vice versa.
This includes social intelligence, mathematical intelligence, sexual
intelligence, aggressive intelligence (aka "cunning") and so on and so
forth. Again, no evidence for this at all. Do, however, look at
Xuenay's 14 Objections
to see someone try to present such evidence anyways.
- HumansUnspecial
- The universe is capable of having beings fundamentally smarter
than humans. What "fundamentally smarter" means is up for grabs; I
have a friend who insists that humans are Turing complete with
respect to intelligence, and that there is nothing that any
intelligence could ever come up with that any fully-functional human
could not understand if they had an indefinite amount of
time to study it. I think he's probably right, but that doesn't
change the fact that one Turing-complete machine can compute both
faster and better than another, and I think that
applies to minds as well. If you want me to talk about this one
more, let me know. No evidence for this at all, again, except the
anti-anthropocentrism argument. No evidence against,
though, either.
- BigIntPossible
- There is little or no upper bound on the maximum intelligence of
a being, except for physical limits themselves. In other words,
human-level intelligence isn't the ceiling (see HumansUnspecial)
and there's no hidden ceiling just above human level. If
there is a ceiling, it's at the point where the being's
computational substrate is so huge (planet-sized or larger) that
light-speed lag starts causing problems. Matrioshka Brains
seem to be the absolutely insane outer limit of this idea. Again,
no evidence; but again, no evidence against, either.
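To put "light-speed lag starts causing problems" into rough numbers, here is a
back-of-the-envelope sketch. The substrate sizes are my own illustrative guesses,
not anything from the sources above; the only point is that one-way signal latency
grows linearly with the physical size of the thinking machine.

```python
# Back-of-the-envelope one-way signal latency across substrates of
# different physical sizes. All sizes are rough, illustrative guesses.
C = 299_792_458.0  # speed of light, m/s

substrates = {
    "CPU die (~3 cm)":                       0.03,
    "data centre (~100 m)":                  100.0,
    "Earth-diameter brain (~12,700 km)":     1.27e7,
    "Matrioshka Brain shell (~2 AU across)": 3.0e11,
}

for name, size_m in substrates.items():
    print(f"{name:40s} one-way light lag: {size_m / C:.3e} s")
```

A chip-sized mind pays about a tenth of a nanosecond to get a signal across itself;
a planet-sized one pays tens of milliseconds; a Matrioshka Brain pays on the order of
a thousand seconds, which is roughly where coordination plausibly becomes the binding
constraint.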
Conclusions
General
- From SRNPossible and BigIntPossible, we have that anything smart
enough can invent SRN more-or-less immediately. By "invent" I mean
"solve all the inherent engineering problems and develop a very
short path from our current technological level to SRN
production".
- From SRNExistential we have that anything smart enough to invent
SRN quickly is a clear and present danger to the continued existence
of the human race.
- On the other hand, from SRNExistential, SRNWeapons, and
StupidHumans we have that EITHER humans must entirely avoid creating
SRN, OR something smarter than humans must be the first being on
earth to create them.
- From StupidHumans (more or less, plus general knowledge) we have
that humans are too stupid to avoid creating a technology just
because it might destroy them all.
- Therefore, from the last two conclusions, we have that a
super-humanly intelligent being MUST be the first being on earth to
develop SRN, because a small chance of such a being stopping us from
destroying ourselves is better than the near-certainty that we'll
manage to do so if left to our own devices.
- From SRNGraspable and the current state of the biotech industry,
human augmentation is unlikely to arrive fast enough for a
super-humanly intelligent human to be the one who invents SRN.
Keep in mind the gestation time of humans when thinking about this
issue.
- Therefore, we must look elsewhere for our super-humanly
intelligent saviour. The only other place I know to look is
AI.
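To make the shape of that chain of inferences explicit, here is a toy propositional
sketch of it in Lean. The proposition names and the exact phrasing of the premises
are mine, chosen purely to expose the dependency structure; this says nothing about
whether the premises themselves are true.

```lean
-- Toy formalization of the chain of conclusions above. The axioms are the
-- informal premises restated as bare propositions; nothing here argues for
-- the premises, it only shows how the conclusion follows from them.
axiom HumansAvoidSRN      : Prop  -- humans entirely refrain from building SRN
axiom SmarterFirst        : Prop  -- something smarter than humans builds SRN first
axiom AugmentedHumanFirst : Prop  -- that smarter something is an augmented human
axiom AGIFirst            : Prop  -- that smarter something is an AGI

-- From SRNExistential, SRNWeapons, and StupidHumans: avoid SRN entirely,
-- or let something smarter than us get there first.
axiom dichotomy : HumansAvoidSRN ∨ SmarterFirst
-- From StupidHumans: we will not refrain.
axiom noRestraint : ¬HumansAvoidSRN
-- The only smarter-than-human candidates I can see: augmented humans or AGI.
axiom smarterCandidates : SmarterFirst → AugmentedHumanFirst ∨ AGIFirst
-- From SRNGraspable and biotech timelines: augmentation will not arrive in time.
axiom noAugmentationInTime : ¬AugmentedHumanFirst

theorem agiMustBeFirst : AGIFirst :=
  Or.elim dichotomy
    (fun h => absurd h noRestraint)
    (fun h => Or.elim (smarterCandidates h)
      (fun ha => absurd ha noAugmentationInTime)
      (fun hg => hg))
```

If you reject the conclusion, the sketch at least makes clear which premise you are
rejecting.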
At this point, I've essentially explained the minimum set of reasons
why I give money to the Singularity Institute, and
further why I think that giving money to groups pursuing friendly
general AI is the single most important philanthropic
pursuit possible for a human today. For the more warm-fuzzy reasons,
see my
sysop scenario essay.
Hard Takeoff Conclusions
Hard
Takeoff is the theory that once we have a near-human-level AGI,
we'll have something far smarter than humans very quickly
thereafter.
- From IntIndependent, HumansUnspecial, BigIntPossible, and my
knowledge of computer software, I infer that a (human equivalent) AI
created deliberately (that is, coded more-or-less by hand, without
using genetic algorithms or neural nets or anything like that) will
be able to make itself smarter by examining its own code, modifying
it, and running test copies of itself, in exactly the way its human
creators would. I'm not even going to postulate that the AI would
be much BETTER at coding than its human creators would, but
I do expect this to be true.
The advantages section of General Intelligence And Seed AI talks about
this issue in some detail.
- From the above, the AI will be able to do this (improve
itself by tweaking its code) more than once. From
BigIntPossible, the AI will be able to do this almost indefinitely.
Each iteration will proceed faster than the previous
iteration, because the AI is smarter now and because it knows more
about modifying itself. Eventually the AI's intelligence will begin
to be bound by the hardware it is running on. At that point it will
slow, probably dramatically, but by that time it is plausible that
the AI will have been trotted out in front of enough adoring
audiences (or whatever) that the project will have a substantial
enough influx of cash to stave off this problem indefinitely. (A toy
numerical sketch of this speed-up-then-plateau curve appears at the
end of this essay.)
- From various previous conclusions, this process will continue
until the AI gets smart enough to develop SRN (or something equally
powerful) and smart enough to convince its creators to let it try
its hand at its inventions. Note that the AI could be deceiving us
about one or both points, and unlike when dealing with humans, we
would have no secondary clues hinting at the deception. Once both
the developing and the convincing are done, the AI is effectively the
ruler of the world (unless other groups have already developed
SRN-level tech, but from SRNWeapons, if humans have done so, we're
probably dead anyways).
- It follows from the above that it is absolutely vital to be
thinking about Friendliness (that is, the craft of ensuring that an
AI still likes humans even after it is much, much smarter than
humans) in every serious general AI project that is currently
underway (and there are at least three that I know of). This leads
right back to why I give money to the SIAI.
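Finally, the promised toy model of the self-improvement curve from the Hard Takeoff
conclusions. Every constant and functional form here is an assumption I made up for
illustration (how fast a smarter AI finds its next improvement, how big each
improvement is, where the hardware ceiling sits); the only claim is the qualitative
shape: passes speed up until the ceiling bites.

```python
# Toy model of recursive self-improvement hitting a hardware ceiling.
# Every constant here is made up; only the qualitative shape matters.

def simulate(initial_intelligence=1.0,   # "human-equivalent" = 1.0
             hardware_ceiling=50.0,      # most intelligence the hardware can support
             gain_per_pass=0.5,          # fractional improvement found on each pass
             passes=12):
    intelligence = initial_intelligence
    elapsed = 0.0
    for i in range(1, passes + 1):
        # Assumption: a smarter AI finds its next improvement faster, so the
        # time for a pass is inversely proportional to current intelligence.
        time_for_pass = 1.0 / intelligence
        elapsed += time_for_pass
        # Each pass multiplies intelligence, but the hardware caps it.
        intelligence = min(intelligence * (1.0 + gain_per_pass), hardware_ceiling)
        print(f"pass {i:2d}: took {time_for_pass:7.3f} units, "
              f"elapsed {elapsed:8.3f}, intelligence {intelligence:6.2f}")

if __name__ == "__main__":
    simulate()
```

Under these made-up numbers the first few passes dominate the wall-clock time and the
later ones are nearly instantaneous until the hardware ceiling is hit; buying more
hardware moves the ceiling, which is the point about the influx of cash above.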