WHAT IS SUPERINTELLIGENCE?

There is a lot of cyclical hype around the concepts of singularity and superintelligence. Every couple of years, the idea resurfaces after (perceived) breakthroughs in the AI field. For Elon Musk, artificial intelligence is about ‘summoning the demon,’ and he recently advised everyone to watch the documentary “Do You Trust This Computer?”. Meanwhile, the dystopian Netflix series Black Mirror is quickly gaining popularity, and Stephen Hawking warned that AI could be the worst event in history. What are they afraid of? And how will it impact you?

What is singularity?

Singularity is the process of intelligence improving itself. British mathematician I.J. Good was the first to describe this ‘intelligence explosion’ in 1965. In his words:

“Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

In other words, at the moment only humans are able to build and improve artificial intelligence, which makes human-level intelligence the holy grail of the field. However, if we can make an AI as intelligent and versatile as the human brain, it should be able to make itself better. With virtually unlimited computing power, that intelligence would start improving itself extremely rapidly. That is why researchers worry about this kind of superintelligence.
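To make the feedback loop in Good's argument concrete, here is a minimal toy sketch in Python. It is purely illustrative: the starting capability, improvement factor, and number of generations are arbitrary assumptions rather than estimates from the article, and it only contrasts a system that improves itself with one that is improved at a fixed rate from outside.

```python
# Purely illustrative toy model of Good's feedback loop: each generation
# designs its successor, and the size of the improvement is assumed to
# scale with current capability. All numbers are arbitrary assumptions.

def self_improvement(capability: float = 1.0,
                     improvement_factor: float = 0.5,
                     generations: int = 10) -> list[float]:
    """Each step, the system improves itself in proportion to how capable it already is."""
    history = [capability]
    for _ in range(generations):
        capability += improvement_factor * capability  # smarter designer -> bigger jump
        history.append(capability)
    return history

def external_improvement(capability: float = 1.0,
                         fixed_gain: float = 0.5,
                         generations: int = 10) -> list[float]:
    """Baseline: an outside designer adds a constant amount of capability per step."""
    history = [capability]
    for _ in range(generations):
        capability += fixed_gain  # improvement does not depend on the machine itself
        history.append(capability)
    return history

if __name__ == "__main__":
    print("recursive :", [round(c, 1) for c in self_improvement()])
    print("human-led :", [round(c, 1) for c in external_improvement()])
    # The recursive curve grows exponentially, the human-led curve linearly;
    # the widening gap between them is the 'intelligence explosion' intuition.
```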

If something is really smarter than us, how do we make sure that it stays aligned with our values and goals? Can it be conscious? And how do we keep from accidentally killing ourselves when the superintelligence misinterprets our goals (or takes them too literally)?

Is superintelligence possible?

The concept of superintelligence, an intelligence capable of improving itself, is based on a couple of premises and assumptions that are up for debate:

  1. While computers are already very fast, the number of calculations likely needed to achieve superintelligence exceeds our current capabilities by orders of magnitude. We haven’t hit the physical limits of Moore’s Law yet (the observation that chip capacity doubles roughly every 18 months), but progress does seem to be slowing down, at least at the CPU level. AI researchers have responded by moving away from CPUs towards GPUs and chips designed specifically for AI, such as the TPU.
  2. Literal singularity means that the doubling time (the time it takes to become twice as smart) would keep shrinking until it approaches zero. Such a development has no precedent in anything we have ever seen before, and it assumes a direct relationship between intelligence and the ability to solve a given problem. However, we know from history that the difficulty of problems is mostly not linear: many problems actually get harder as you approach the ‘end.’ The jump from a 90% to a 99% accurate model is usually bigger than the jump from 0% to 10% (see the sketch after this list).
  3. When do you get diminishing returns from more intelligence? Will an IQ of 15,000 perform a hundred times better than an IQ of 150? Even if artificial intelligence reaches a level where intelligence is virtually limitless, it won’t necessarily solve problems faster. Just as human intelligence is constrained by its surroundings, even the best AI models (AlphaGo, for instance) still depend heavily on human input for direction.
  4. Will AI have all the data it needs? Weather forecasts would likely be 100% accurate if their models were fed all the right data, but that task is quite linear. When intelligence needs to be broader, as in the case of simulating human intelligence, the data input needs to be far more precise.
  5. Models also need to be incentivised to improve. Humans have an innate urge to learn, but would computer models do the same, and at what pace? Once humans stop directing a model to get better, will it keep improving on its own?
  6. Can a computer become better at creativity and make non-linear leaps? Would an AI try to maximise the output of a horse, or would it eventually invent the car?
  7. Would we humans let an AI range freely and explore the limits of intelligence without our supervision? And if we do need to supervise it, could we do so without constraining the AI to human knowledge?
  8. Is human intelligence, in the end, entirely physical? Or, in other words, is there more between heaven and earth? The physical view is largely uncontested in the Western world, but some researchers suggest otherwise.
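Points 1 and 2 above can be made concrete with a bit of arithmetic. The sketch below is a rough illustration only: the 18-month doubling period comes from the article’s phrasing of Moore’s Law, and the “one effort unit per halving of the remaining error” rule is an invented assumption, used only to show why the last few percent of accuracy can be the most expensive.

```python
# Minimal sketch of the two effects described in points 1 and 2 above.
# All numbers are assumptions for illustration, not data from the article.
import math

# Point 1: if chip capacity doubles roughly every 18 months (Moore's Law),
# how much more capacity does a decade buy?
months = 10 * 12
doubling_period = 18  # months, as quoted in the article
growth = 2 ** (months / doubling_period)
print(f"Capacity after 10 years: ~{growth:.0f}x")  # ~102x

# Point 2: progress toward 'the end' of a problem is usually not linear.
# Assume every halving of the remaining error costs one fixed effort unit;
# then each extra 'nine' of accuracy gets disproportionately expensive.
def effort_for_accuracy(accuracy: float) -> float:
    """Effort units needed if halving the remaining error costs 1 unit."""
    remaining_error = 1.0 - accuracy
    return math.log2(1.0 / remaining_error)

for acc in (0.10, 0.90, 0.99, 0.999):
    print(f"accuracy {acc:.1%}: ~{effort_for_accuracy(acc):.1f} effort units")
# Going from 0% to 10% costs ~0.15 units, while going from 90% to 99%
# costs ~3.3 additional units: the last stretch is by far the hardest.
```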

(When) will it happen?

The vast majority of researchers think the problems above can be solved, though they disagree on how difficult that will be. Estimates for when we might reach superintelligence vary wildly. Google researcher and famous futurist Ray Kurzweil predicts it could happen as early as 2029, but there are also researchers who argue that it will take far longer, or never happen at all. At a conference in Puerto Rico, a survey among some of the top scientists in the field produced a median prediction of 2055, while a similar survey by Nick Bostrom yielded different results. Things get even more complicated when you mix time frames with the augmentation of human intelligence: we are currently teaching computers to imitate human thinking, but we could also build computers that simulate the human brain, or augment human intelligence with artificial intelligence.

Regardless of time frames, many researchers, including Bostrom, Tegmark, and Russell, argue that the problems we are trying to solve are so complex that we had better start building safety into AI right now. In my view, the recent Meltdown and Spectre CPU vulnerabilities proved that, although some systems undergo rigorous testing, we haven’t mastered computer security yet. We seem miles away from designing systems that remain secure through millions of iterations of self-improvement. The same goes for value and goal alignment with superintelligence. How do you align your goals and values with an intelligence that is orders of magnitude smarter than you? And whose values are you trying to implement: Western values, Trump’s values, IS’ values? Luckily, most of the biggest companies are, slowly but surely, taking their responsibility more seriously and have started initiatives to map out these challenges.

Personally, I believe that while superintelligence is a very real possibility, the more immediate risk is the tendency of AI and automation to amplify human bias and other ‘bugs’ in society. Although malicious use of AI will likely affect society at some point, I think the real risk of AI lies in its impact on the fabric of society. Put differently, it’s more likely that you’ll become unemployed because your job is automated, or that you’ll face discrimination from badly implemented algorithms, than that you’ll be killed by a killer drone.

More about AI and Superintelligence

Want more information on the subject? I wrote two longer posts on Medium about AI safety research and the symbiotic relationship between humans and AI. I also highly recommend Max Tegmark’s talk on YouTube, as well as the books Life 3.0 by Max Tegmark and Superintelligence by Nick Bostrom.
