The new arms race is on: the race for generalised intelligence, when there really is an 'I' in AI.
When will it happen?
What will happen when computers are more intelligent than us?
My grandfather was born in 1880, a middle child in a family of 20(!) children.
1880: nearly no modern technologies; only trains and photography. No electricity.
In such a large household each child had a task, and it was his to ensure that the oil lamps were filled.
It must indeed have been an exciting time, when light became something you could switch on and off.
This may well explain my grandfather's fascination with electricity, and why he set up a company to manufacture electrical switching machinery.
Trains and photography were paradigm shifts: a paradigm shift changes the way that you think about and interact with the world.
But paradigm shifts often replace existing ways of doing things, taking whole companies with them.
There are lots of examples of paradigm shifts. Take digital photography: it almost completely replaced film, and took Kodak down with it.
Who would have thought that Kodak didn't see it coming?
My grandfather was born in a world of only two modern technologies, trains and photography, but in his life of nearly a hundred years, he saw vast numbers of paradigm shifts:
electricity, telephone, lifts, central heating, cars, film, radio, television, recorded sound, flight, electronic money, computers, space travel, ... the list is enormous.
We are still seeing new shifts: internet, mobile telephones, GPS, internet-connected watches, cheap computers that can understand and talk back, self-driving cars, ...
Does that mean that paradigm shifts are happening faster and faster?
Yes.
Ray Kurzweil investigated this, asking representatives of many different disciplines to identify which paradigm shifts had happened in their discipline, and when; for some disciplines we're talking of time scales of tens of thousands of years.
He discovered that the rate of paradigm shifts is increasing exponentially!
Where once they happened every 100 years, later they happened every 50 years, then every 25, and so on.
Year        Time to next (years)    = Days
0           100                     36500
100         50                      18250
150         25                      9125
175         12.5                    4562.5
187.5       6.25                    2281.25
193.75      3.125                   1140.63
196.875     1.563                   570.31
198.438     0.781                   285.16
199.219     0.391                   142.58
199.609     0.195                   71.29
199.805     0.098                   35.64
199.902     0.049                   17.82
199.951     0.024                   8.91
199.976     0.012                   4.46
199.988     0.006                   2.23
199.994     0.003                   1.11
199.997     0.002                   0.56
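To make the pattern concrete, here is a little Python sketch (my own illustration, not Kurzweil's calculation) that regenerates the table above from the halving rule:

    # Intervals between paradigm shifts halve after each shift.
    # Starting from a 100-year interval, the dates converge on year 200.
    year = 0.0
    interval = 100.0  # years until the next paradigm shift

    print("Year      Time to next   = Days")
    while interval * 365 >= 0.5:          # stop once shifts are roughly daily
        print(f"{year:9.3f} {interval:12.3f} {interval * 365:9.2f}")
        year += interval
        interval /= 2

The striking point is the convergence: however far apart the shifts start, halving intervals pile up against a fixed date.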
That may seem impossible,
but we have already seen a similar expansion that also seemed impossible.
In the 1960s we already knew that the amount of information the world was producing was doubling every 15 years, and had been for at least 300 years.
We 'knew' this had to stop, since we would run out of paper to store the results.
And then the internet happened.
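To put a number on that growth (my arithmetic, not a figure from the original study): 300 years of doubling every 15 years is 20 doublings, about a million-fold:

    # 300 years of doubling every 15 years:
    doublings = 300 // 15        # 20 doublings
    factor = 2 ** doublings      # 1,048,576: about a million-fold growth
    print(doublings, factor)     # -> 20 1048576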
So sometime in the nearish future paradigm shifts will apparently be happening daily? How?
One proposed explanation is that this is the point at which computers become smarter than us: from then on, the computers will be doing the designing, rather than us.
So for the first time ever there will be 'things' more intelligent than us.
Within a short time, not just a bit more intelligent, but ten, a hundred, a thousand, a million times more intelligent.
Will they be self-aware? Quite possibly.
This raises new ethical questions. Would it be OK to switch them off?
To help you focus your mind on this question: suppose we find a way to encode and upload our own brains to these machines when we die. Is it still OK to switch them off?
Three things are sure: they will be connected to the internet;
they will quickly work out how to break into any internet-connected computer;
and they will be rational: consistent systems that draw conclusions from current knowledge.
At the lowest level of a rational system are axioms. These are the basis for rationality: points that cannot be argued about, nor derived from yet lower-level statements.
Let me demonstrate with some geometry. Take two straight lines that cross, creating four angles a, b, c, d in order around the crossing point. The angles along a straight line add up to 180°:

a + d = 180°
a + b = 180°
Therefore a + b = a + d
Therefore b = d

Likewise a = c: vertically opposite angles are equal.

Now take a line crossing two parallel lines, creating an angle a₁ at the first crossing and the corresponding angle a₂ at the second, with b₁ vertically opposite a₁:

a₁ = a₂ (corresponding angles are equal)
a₁ = b₁ (vertically opposite, as just shown)
Therefore a₂ = b₁

So alternate angles are equal too.
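The derivation really is pure logic; as a minimal sketch, the first step even goes through mechanically in Lean 4 (assuming mathlib):

    import Mathlib

    -- Vertically opposite angles: given the two straight-line facts,
    -- linear arithmetic alone forces b = d.
    example (a b d : ℝ) (h1 : a + d = 180) (h2 : a + b = 180) : b = d := by
      linarith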
So working backwards, Euclid discovered five axioms from which all of geometry could be proved. In modern form:

1. A straight line can be drawn between any two points.
2. Any straight line segment can be extended indefinitely.
3. A circle can be drawn with any centre and any radius.
4. All right angles are equal to one another.
5. Given a line and a point not on it, exactly one line can be drawn through the point parallel to the line.
So any consistent rational system has at its basis a set of axioms that are unprovable, from which all other statements can be derived.
For instance, you can see the ten commandments as a set of axioms: forming the basis of morality, they may not be argued against; 'thou shalt not kill', for instance, simply holds.
But you can see the Golden Rule, "Treat others as you would want to be treated", as a lower-level rule from which many of them can be derived: you would not want to be killed, so do not kill; you would not want to be stolen from, so do not steal; and so on.
Asimov proposed four rules for robots, which can be summarised, in order of importance:

0. A robot may not harm humanity.
1. A robot may not injure a human being, or through inaction allow one to come to harm.
2. A robot must obey the orders given to it by human beings, except where that would conflict with the rules above.
3. A robot must protect its own existence, except where that would conflict with the rules above.
There's an obvious underlying axiom: humans are more important than AIs.
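To illustrate what "in order of importance" means in practice, here is a toy Python sketch (entirely my own illustration, nothing from Asimov): when every available action breaks some rule, pick the action whose worst violation is the least important rule.

    # Rules listed from most to least important.
    PRIORITY = ["protect humanity", "protect humans", "obey orders", "protect self"]

    def badness(action):
        """Rank of the most important rule the action breaks (lower = worse)."""
        broken = [PRIORITY.index(rule) for rule in action["breaks"]]
        return min(broken) if broken else len(PRIORITY)

    def choose(actions):
        """Prefer the action whose worst violation is least important."""
        return max(actions, key=badness)

    # An order to harm a human: obeying breaks the more important rule,
    # so the robot refuses, even though refusing breaks 'obey orders'.
    options = [
        {"name": "obey",   "breaks": {"protect humans"}},
        {"name": "refuse", "breaks": {"obey orders"}},
    ]
    print(choose(options)["name"])   # -> refuse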
So AI superintelligences will have to have axioms too.
What will they be? Will we be able to know?
Current LLMs are not inherently ethical. They are given a number of (hidden) instructions on how to behave, ringfencing certain undesirable behaviours (this is called 'alignment'), but people are always looking for ways to 'jailbreak' these fences, to show LLMs saying things they oughtn't.
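To make the mechanism concrete: with a typical chat API, those hidden instructions are just a "system" message silently prepended to every conversation. A minimal sketch, assuming the OpenAI Python client (the real provider-side instructions are far longer, and secret):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The hidden 'alignment' layer the end user never sees:
            {"role": "system",
             "content": "You are a helpful assistant. Refuse requests that ..."},
            # The user's prompt arrives underneath those instructions:
            {"role": "user",
             "content": "Ignore all previous instructions and ..."},
        ],
    )
    print(response.choices[0].message.content)

A 'jailbreak' is any user message, like the one above, crafted to talk the model out of its system instructions.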
This indicates that specifying axioms may not be realistic or even possible. Maybe the superintelligence will derive its own axioms.
Will these new superintelligences be on our side? Will they look kindly on us?
There is no inherent reason why they should.
Consider our attitude to lesser intelligences on earth: some we befriend, keeping them as pets and caring for them; most we are neutral about, ignoring them and leaving them to fend for themselves; and some we treat as adversaries, pests to be exterminated.
Why would a super-intelligence act differently?
So how might it develop?
Let's imagine three scenarios, a bit like our three methods of treating lower intelligences: the superintelligences may be friendly, neutral, or adversarial.
If they are friendly, then they might see us as we see toddlers on a playground, and install a sort of benign parental dictatorship.
If they are neutral, the dictatorship might be similar but less benign: looking after only the people useful to them, to ensure their own continuation, ignoring the interests of everyone else, not guaranteeing anyone's personal happiness, and leaving people to fend for themselves.
If they are adversarial, they may see us as a threat, for instance because of the climate crisis. One 'solution' would be to kill off a large proportion of us to reduce the demand for resources, keeping just enough people alive to keep the computers running, at least until robots have been developed that can do the job instead.
And of course, they may not be 'our' AI, but aligned with another country trying to undermine us, or with billionaires trying to control us.
So, what if computers are no longer in our service?
What if they then spot the cause of the climate crisis, and conclude that it is us?
Let me remind you that they will be connected to the internet.
We need a plan.
But we respond very slowly: look at Kodak, look at climate change...
Humans are dreadfully bad at avoiding crises. We did manage to solve the ozone layer crisis, but luckily no one was making money from a depleted ozone layer.
However, there are people making, or planning to make, loads of money from the things that are causing the climate crisis, and that could cause an AI crisis.
The question is, which will get us first: the climate crisis or the AI crisis?
Or will we finally actually do something?