
Transcendent Man Delayed


Just noticed there’s a new documentary about Ray Kurzweil and his big ideas (transhumanism, artificial intelligence, technological singularity, etc.). The movie’s called Transcendent Man:

… offering [Kurzweil's] vision of a future in which we will merge with our machines, can live forever, and are billions of times more intelligent…all within the next thirty years.

I saw him talk about it in a Charlie Rose segment, and it got me thinking…

I want to be fair because I think it’s worth considering all the possibilities. I also think there’s a risk of getting carried away by AWESOME IDEAS. But then again I think the risks are exaggerated by a mainstream cultural bias for thrills, glorification of the “human spirit,” and badass CGI baddies annihilating cities with ion cannons and other sorts of heavily art-directed doomsday scenarios.

And that cultural factor, I think, is more important to the future of technology than people realize.

There are all kinds of difficult-to-foresee factors that can (or inevitably will) disrupt expert predictions, which is why experts keep getting things wrong (read Dan Gardner’s Future Babble).

Remember past predictions about overpopulation and resource scarcity. It’s natural to look at a graph of population growth and imagine that one day it’ll get so high that we can’t sustain ourselves, but what early theorists like Malthus didn’t anticipate was that population growth would plateau in societies that achieve the degree of prosperity we have now. Not only do we keep coming up with clever ways to use resources, but we’re so busy inventing things and making things more efficient (i.e. pursuing careers) that we have less need — and less desire — to have kids. The population bomb hasn’t gone off, and we’ve got little reason to believe today’s prophets are right when those in the past were so wrong.
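For what it’s worth, the gap between the Malthusian picture and the plateau is easy to see in the math: unchecked exponential growth keeps exploding, while a logistic curve flattens out at a carrying capacity. Here’s a minimal Python sketch of the two curves (the growth rate and carrying capacity below are made-up numbers, purely for illustration):

# A toy comparison of Malthusian (exponential) vs. logistic population growth.
# All parameters are assumed for demonstration; none of this is real demographic data.
import math

P0 = 1.0   # starting population, arbitrary units (assumed)
r = 0.02   # growth rate per year (assumed)
K = 10.0   # carrying capacity the logistic curve levels off at (assumed)

def malthusian(t):
    """Unchecked exponential growth: P(t) = P0 * e^(r*t)."""
    return P0 * math.exp(r * t)

def logistic(t):
    """Growth that slows near capacity: P(t) = K / (1 + ((K - P0)/P0) * e^(-r*t))."""
    return K / (1 + ((K - P0) / P0) * math.exp(-r * t))

for t in (0, 100, 200, 400):
    print(f"year {t:3d}:  exponential = {malthusian(t):8.2f}   logistic = {logistic(t):5.2f}")

# The exponential curve blows past any bound; the logistic one flattens near K,
# roughly the shape demographers actually observed as prosperity rose.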

Likewise, it’s natural to assume that processing speed, memory capacity, and so on will keep going up the way they are now until they reach some intrinsic limit or culmination. But that quietly assumes technology has already unhitched its fate from humanity — as if a kind of singularity had already happened, as if the machines were capable of determining their own future without our intervention.

But technology still needs us — it needs us to develop and test it, to write and enforce policies, to fund it, buy it, use it and sell it, to talk to friends and audiences and Twitter followers about how awesome it is — and all of our wet and periodically messy human variables aren’t stable enough to base solid long-term predictions on.

Now look at how many people have a deep distrust of science and technology (or at least espouse distrust of technology as part of some conflicting agenda).

There are the fringe types — Unabombers and Al Qaedas — but even more worrying are popular protests against vaccines, wifi, and worse: serious legislative battles over the teaching of evolution, genetic research and climate science.

So as long as we’re making big generalizations, let’s look at the possibility that the more sophisticated science and technology become, the more we’ll have to address knowledge gaps, and the more likely people are to distrust progress — a limitation that could preempt the realization of true artificial intelligence, the singularity, and whatever else tech-focused futurists come up with.

Progress isn’t limited only by what technology itself is capable of, but also by our collective will to take it that far.

Whether or not you agree, there’s still the problem of pretty widespread anti-scientific sentiment, not in the future but right now. It might be worth spending a little less time dreaming about what could be, if only to preserve the progress we’ve already made.

