I’ve skimmed through the parts of The Visioneers that Amazon lets me read online, and it strikes me that I could have written this book myself. I’ve witnessed or at least followed the events it describes, starting with my membership in the L-5 Society in the late 1970s and my involvement in the cryonics subculture starting in the 1990s. I’ve even met, or know, some of the people mentioned in this book.
It seems to cover a lot of the same ground as Ed Regis’s Great Mambo Chicken, published in 1990, but with the benefit of an additional 20 years added to the baseline, showing that many of the futurist ideas from the 1970s and 1980s that generated such enthusiasm at the time -- centered on Gerard K. O’Neill and Eric Drexler -- apparently don’t work.
Unfortunately, cryonics has gotten mixed up in this bad futurology, even though cryonics has its own independent set of problems and its own ways of trying to solve them. The association has apparently damaged cryonics’ credibility as well.
So I imagine that in about 20 years someone will write a similar book about (the probably dead or cryosuspended) Ray Kurzweil and Eliezer Yudkowsky, showing how they misdirected a generation of geeks with their futurist fantasies the way O’Neill and Drexler did in the 1970s and 1980s.
http://www.cryonics.org/chap11_1.html
“It should be amply clear by now that the immortal superman represents not just a goal, but a way of life, a world-view only partly compatible with today’s dominant ideologies. We might call this fresh outlook the new meliorism, of which the cryonics or people-freezing program is an important current element.” [My emphasis.]
I don’t think Ettinger anticipated that transhumanism would instead turn into an unserious, bohemian youth subculture, separate from cryonics, which acts as if holding transhumanist versions of Halloween several times a year will bring its members closer to becoming “immortal supermen.”
One of the blogs I follow argues that America’s ruling class depends on wealth tied to fossil fuels for its support. The nuclear power industry makes the ruling class uneasy because it shifts status to people who earn it through diligence, good character, and cognitive ability, even if they come from the gutter, instead of the ones who inherit family fortunes and go to Ivy League schools. As Ayn Rand might have said, nuclear power belongs to the Men of the Mind.
Lastly, there is Woodward-Mach propulsion, which also appears to be real.
As I’m sure you’re all too well aware, the current crop of >Hists (who have basically subsumed the cryonics crowd by now) **do not** want to hear this kind of thing. Say it on the Extropians’ list (or WTA Talk, or wherever) and you’ll be shouted down, and eventually just moderated off. Read an SF author who makes that the theme of a story, and you can be sure that SF author will **not** be among the >Hist canon.
Years ago, I introduced a friend of mine (a very smart guy, who will himself turn 75 this year) to the Extropians’ list, and after a few weeks reading it (and even contributing one or two posts), he pulled the plug in disgust, dismissing it as “buncha kids wanna live forever”. He was right, of course, though I’d say “buncha kids wanna live forever **without thinking too clearly about what _exactly_ that might entail**”.
What **exactly**, for example, is going on when a prominent “anti-death crusader” among the >Hists (I’m not going to name the name here) uses the death of a family member -- a very young family member who just **committed suicide**, for crying out loud -- as an excuse to redouble his (somewhat self-aggrandizing, but what else is new?) rhetoric? But what about the **suicide**? What made your family member want to do that? Did you even see it coming? What does that imply **generally**? Yes, immortality is there for the grabbing for those who remain desperate enough to live; and as for the rest -- to hell with them? Is that how you feel now about your family member?
But no. Off limits. Not even acknowledged publicly (the “suicide” verdict came from independent news sources).
Yes, Mark Humphrys (among many others) spoke about this back in ’97:
http://computing.dcu.ie/~humphrys/newsci.html
————-
“[I]f some of the intelligence of the horse can be put back into the automobile, thousands of lives could be saved, as cars become nervous of their drunk owners, and refuse to get into positions where they would crash at high speed. We may look back in amazement at the carnage tolerated in this age, when every western country had road deaths equivalent to a long, slow-burning war. In the future, drunks will be able to use cars, which will take them home like loyal horses. And not just drunks, but children, the old and infirm, the blind, all will be empowered.”
————-
Google, of course, is actually working on this (receiving the technological torch from Carnegie Mellon University, I guess), but they get no credit for it from >Hists at all, as far as I can tell.
Driving a car is much closer to having an infrastructure for intelligence than parsing human language or doing math, IMO. “Being precedes describing,” as Gerald Edelman once put it.
There was a work of fiction published -- oh, back in ’96 I think: _Society of the Mind_ by Eric L. Harry (not to be confused with _The Society of Mind_ by Marvin Minsky ;-> ) http://www.amazon.com/Society-The-Mind-Eric-Harry/dp/0340657243/ . It’s usually classified as a “thriller” rather than pure SF, but it has not just self-driving cars, but also some interesting (and sometimes chilling) scenes involving the training of robots to interact both with inanimate objects and with living things. Much more insightful than the stuff about AI that was being posted to the Extropians’ list at the time.
Really -- if you take some of the “Singularity” discourse at face value -- and a lot of >Hists do, and are unwilling to acknowledge **any** limits: “If we can imagine it, it can be done!” -- then why even bother with suspension? Folks at the end of time will be able to look back with a super-telescope and interpolate the position of every atom in the universe at any point in time, and resurrect every virus, every amoeba, every dinosaur, and every human who ever lived. Folks will be revived whether they were cryonauts or not! (Sounds something like Heaven, dunnit?)
Of course this was Frank Tipler’s idea, and in fact there was a bit of it in Stapledon’s _Last and First Men_ and _Star Maker_ -- not literal resurrection, but the Last Men could telepathically contact humans from earlier ages in order to “understand” and “appreciate” them, and thereby enhance their significance in the history of the universe, or at least enhance the Last Men’s understanding and appreciation of their significance in the history of the universe, or something. Something like the Mormons baptizing the dead.
No, there are other necessary ingredients -- **emotional** ingredients -- including a usually unexamined and unacknowledged dynamic between guru and follower(s), that are required to found a cult (and not just to be invited to appear on a stage at TED). The guru really needs to have an irrational degree of self-confidence, and an attitude of overweening haughtiness that would engender mockery in an unsusceptible audience, but which, in individuals susceptible to the “guru whammy,” elicits an emotion that can be as intense as romantic love. (See, for example, Andre Van Der Braak’s _Enlightenment Blues_.)
Which makes it somewhat ironic that Yudkowsky credits (or credited, 15 years ago in a post on the Extropians’ list) his initial decision to “devote his life to bringing about the Singularity” to reading Vernor Vinge’s _True Names_.