I'm a strong believer in actually building things
rather than just talking.
But like everyone else in the field,
I've got some informal thoughts on what it's all about.
Here's my current set of conclusions.
I have given a popular science talk on this.
Despite my disagreements with symbolic AI,
I still accept that strong AI (embodied) is the reasonable working hypothesis
if you accept materialism.
I find the arguments of Penrose, Searle and Bringsjord unconvincing,
Dreyfus is alright with me,
and he exempts my type of AI from his criticism.
Edelman is alright too - it's just that he doesn't realise he's one of us!
Daniel C. Dennett's classic 1989
review of Penrose
shows that AI is just the consequence of ordinary scientific materialism,
and any alternative had better fit into evolutionary materialism as well as AI does.
I feel the same way about
Freud (as I do about Edelman).
I find Freud
a source of good metaphors
- multiple minds in one body, for example.
I can even agree with Minsky that Freud recognises the sheer complexity
of the society of mind.
But most of Freud's specific theory
- the exact number and role of such multiple minds,
childhood sexuality and feelings towards parents,
and so on
- is, to put it politely,
simply assertion for which there is no evidence.
Psychology in general lends itself to mythology and
pseudoscience like Freudian theory
precisely because we have no proper models of how the brain works yet.
We simply have to be patient.
Consciousness was regarded for many years as a subject about which science could not yet say anything meaningful.
Despite all the current hype, I see little reason to believe that that situation has changed.
Incidentally, the reason most critics of AI get it wrong is that their concept of AI is so narrow.
They are generally unaware of the biologically-friendly camps within AI
(the stuff I link to here).
For me, AI cannot just be about getting systems to do apparently intelligent things
- for then all of computer science could be AI.
The functional approach, started by Turing,
of building systems to do things by any means possible, is worthwhile enough,
but it is engineering - plain programming, hacking.
More interesting to me is the question: How do living systems do it?
I want to see an AI that can make some contribution to cognitive science
- an AI that works
the way that living things might plausibly work.
I don't think we have to be over-scrupulous about this.
The standard I would set is simply that an AI model should be evolutionarily plausible:
I want to see the path along which it could have evolved incrementally
from simpler models by natural selection.
For me, symbolic AI currently fails this test.
(How do symbols evolve from nothing in the infant mind?)
As does supervised learning.
(Who is the teacher supplying explicit I/O pairs?)
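To make the objection concrete, here is a minimal sketch of what supervised learning demands. This is hypothetical illustration code, not drawn from any particular system: the point is simply that the algorithm only works because a "teacher" hands it an explicit target output for every input - something nature does not supply to an evolving or developing creature.

```python
def train_linear_unit(pairs, lr=0.1, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on explicit I/O pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in pairs:      # the teacher must supply 'target' for each x
            err = target - (w * x + b)
            w += lr * err * x
            b += lr * err
    return w, b

# The teacher supplies the correct answer for every input (here, y = 2x + 1):
pairs = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train_linear_unit(pairs)
```

Strip out the target values and the update rule has nothing to push against - which is exactly the evolutionary-plausibility question: who, in the infant animal's world, plays the role of `target`?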
I would like to think that I am working on
evolutionarily-plausible methods of learning.
I also think that a sound model of human-level intelligence
has to wait until we have a much better model of the underlying animal.
It seems to me that language etc. is something which requires as a substrate
an already fully-functioning, mobile, pre-linguistic creature.
I don't see how you can be intelligent without being a mobile, tactile animal first
(that is, I don't believe in the possibility of disembodied, linguistic-only,
Eliza-like or HAL-like intelligence).
In lyrical terms, if we ever want to make an H. sapiens,
the challenge now is to make an H. erectus.
While I think AI is possible in theory,
that is not the same as saying that society will ever actually build
a race of AI's in practice.
Here's why the AI project, while it will get a good way along the road outlined above,
will start to stall before full AI is ever reached:
Think where we came from. We had a whole planet to play with.
We had a million years of time to develop our tools and our culture.
There were many thousands to millions of us.
How can we expect to build single isolated AI's, alone in laboratories,
and get anywhere?
They can't join in our culture because they are not like us
(they don't understand our sensory world or our life experiences).
They can't develop their own culture because they are not like each other.
Even if we made them all identical, so they could talk to each other,
they would need a lot of time and a lot of space to develop any rudimentary culture at all,
and they would probably fail.
The whole concept should make you despair.
The planet is full. There's no space for another species and culture to develop.
There's more. Even if you somehow could give them the numbers and space they need to grow and develop,
the process would not be fun, for us or them.
The futurists, like so many people, fail to grasp the basic Darwinian world view
that you can't get from here to there without passing through the mess in between.
You can't get from simple robotic animals to the enlightened high intelligence of
Hans Moravec's Mind Children
without first passing through absurd superstitious savagery.
Long before our AI's discovered science
they would discover superstition.
They would have their own religions, their own tribal hatreds, their own holocausts.
They wouldn't listen to us telling them we'd been there, done that,
any more than those
currently reliving the Middle Ages are listening to the West today.
So while I love Moravec's originality, I'm unconvinced that non-human machines will take over the world.
They will run into many of the same boring troubles that we have run into.
Read Jared Diamond on evolutionary history
to understand how easily cultures fail,
and how hard our human success was.
So AI is possible in theory (materialism is true)
but pretty much impossible in practice (the planet is full).
And even if it were possible in practice, the process would not be fun,
and could easily stall before full intelligence was reached because of non-cooperation by the creatures themselves.
There's nothing wrong with speculating about the long-term future,
as Moravec brilliantly does,
and as do some others.
Unfortunately, much modern sci-fi,
and many second-rate speculators,
have made such long-term speculation seem intellectually disreputable.
So what's the answer?
The answer is that we don't make AI's, because there's no room for them.
The answer is that
"Robots shall inherit the Earth; and they shall be Us ..."
This would be the nicest future.
No AI's as a separate, enemy species.
No Terminator wars.
Rather, AI's being us.
Us human beings, finally figuring out how we work
and gradually, over hundreds of years, becoming full AI's ourselves.
Us having all the fun, us getting to go to the stars
instead of our ungrateful creations.
And of course this also means: Us going immortal.
The mind, the body, and why they die, are scientific problems and some day they will be solved.
Not in our lifetimes, not in fifty years or any of the other absurd estimates made by AI promoters.
Two hundred years would be a more reasonable estimate.
But if civilization continues, someday these problems must fall.
Immortality is inevitable,
not in an imaginary spirit world, but here in the real world.
The only world there is.
Well what alternative future do you imagine then?
Do you really think that humans are going to go on being forced to die by nature
for the next thousands and thousands of years?
Humans will live forever - but it will be our descendants, not us.
Right now everyone dies, and most people are so blinded by their culture
that they accept this as natural and even good.
They are free to die if they like it so much,
but some of us would be quite happy to be immortal (or at least to have the choice)
and we think wistfully of that future day when this will become possible.
The methods below offer only the slimmest of chances of surviving to see that day,
but the alternative is no chance at all.
Anyway it's great to see people challenging this huge taboo.
Death may be natural, but that doesn't mean we have to accept it.
We're in charge now, not stupid uncaring nature.
Cryonics is the standard approach to this problem
- preserving the material itself by freezing.
There are lots of problems with this - you need a carefully organised peaceful death,
and you may run out of people to pay for your ongoing suspension.
A workable alternative to cryonics could be to take an entire
scan of your neural structure
and simply store the data until such time as it can be reconstructed.
This would be far less messy than cryonics.
You could do the scan during life and go ahead and die naturally,
and you don't need anyone around to pay for your ongoing suspension.
You just sit on a shelf until such time as you are read back out into a body.
If no one is interested in rebuilding you for centuries, that's fine.
You can wait.
My family tree pages are really a labour of love
for ordinary, mortal, unknown humans,
those who will have no biographies written about them,
those who died too long ago to be missed.
No matter what disasters happened to your life, or how early it ended,
these carefully compiled pages show that you too once lived,
your heart once beat,
you were young and anything could happen.
And then the moment passed and you vanished into the night.
Someday, when death is voluntary, there will be a record of the earliest-born person in history to go immortal.
The methods above may be hopelessly optimistic,
but on the other hand, it is just about credible that that person's birthdate might be as early as the twentieth century.